Upgrading: You can argue that it will be hard to find a match for your video card down the road, but recall the recent move to dual-channel memory. I had no problems at all finding a match for my 256MB stick at the time. Same when I had a VP6; gathering up matched CPUs was no great chore.
You don't need to match RAM for DC memory. Any two sticks the same size that run the same speed *should* work. Most dual-CPU boards are the same; even if the chips are a different stepping, it should work as long as they're the same speed. OTOH, I'm not certain why you have to perfectly match your video cards, but apparently you do.
Downgrading: When my VP6 was relegated to lesser duty, I popped out the second CPU and a stick of the RAM to partially fund my upgrade. I would have loved to be able to split the video card as well. In Rollo's case, he passed on an X850 when he got his SLI setup. As the parent of teenage game-playing rats, I like my kids to be able to enjoy the finer things in life too. A 6600GT would serve them just as well at their monitor's max resolution as half of an X850, but that can't be done without modularity.
Well, I haven't heard of any plans for modular video cards, but the idea does get tossed around every now and then, and if manufacturers ever stop completely overhauling their designs every 12-18 months, it could become more feasible.
GPUs are already at 200+ million transistors and only going to grow. Wouldn't it be easier to manufacture two smaller cores at, say, 60-70% of the size without defects? I.e., more cores to speed-bin and match, and fewer huge defective cores to toss?
I personally think they're going to go in this direction (either multi-core, multiple GPUs on one card, or both). Building monolithic 32-pipeline chips is going to be a nightmare in terms of yields, and they'll be huge even at 90nm.
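To put some rough numbers behind that yield argument, here's a minimal back-of-the-envelope sketch in Python. It assumes the classic Poisson yield model (yield = exp(-area * defect density)) and completely made-up die sizes and defect densities; none of these figures come from a real fab:

```python
# Back-of-the-envelope comparison: one big die vs. two smaller ones.
# All numbers are illustrative assumptions, not real GPU figures.
import math

DEFECTS_PER_CM2 = 0.5             # assumed defect density
WAFER_AREA_CM2 = math.pi * 15**2  # 300 mm wafer, ignoring edge loss

def poisson_yield(die_area_cm2):
    """Classic Poisson yield model: Y = exp(-A * D)."""
    return math.exp(-die_area_cm2 * DEFECTS_PER_CM2)

def good_dies_per_wafer(die_area_cm2):
    return (WAFER_AREA_CM2 / die_area_cm2) * poisson_yield(die_area_cm2)

big = 3.0            # hypothetical ~300 mm^2 monolithic GPU
small = 0.65 * big   # each small core at ~65% of the big die's area

# A GPU built from small cores needs two good dies.
gpus_from_big = good_dies_per_wafer(big)
gpus_from_small = good_dies_per_wafer(small) / 2

print(f"Big die:    {poisson_yield(big):.1%} yield, {gpus_from_big:.0f} GPUs/wafer")
print(f"Small dies: {poisson_yield(small):.1%} yield, {gpus_from_small:.0f} GPUs/wafer")
```

With these numbers the two-small-die approach comes out ahead (~68 GPUs per wafer vs. ~53) even though a pair of 65%-size dies burns more total silicon than one big die: the smaller dies' much better yield more than makes up for it.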
SLI is a good technology for people who need more speed than a single high-end card can provide (consumers with money to burn, CAD/CAM professionals for whom another $500 to nearly double the GPU's speed is a damn good deal). If you somehow knew that graphics cards wouldn't get dramatically faster or add any new features for 2-3 years, it would be a decent upgrade path. But as long as manufacturers are putting out a new core design and doubling overall speed every 2 years (or less, sometimes!), it seems a little silly to lock yourself into an old architecture, especially when you're paying a premium for it now. IMO, anyway.
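Just to make that trade-off concrete, here's a quick sketch with hypothetical numbers (the ~80% SLI scaling and the doubles-every-2-years cadence are my assumptions, not measurements):

```python
# Rough upgrade-path comparison; every figure here is a placeholder.
card_price = 500.0     # assumed price of one high-end card
sli_scaling = 0.8      # assumed: second card adds ~80% ("nearly double")
nextgen_speedup = 2.0  # assumed: new core roughly doubles overall speed

sli_now = 1.0 + sli_scaling      # 1.8x, for +$500 today on the old core
nextgen_later = nextgen_speedup  # 2.0x, for ~$500 in a couple of years

print(f"Second card today:   {sli_now:.1f}x for ${card_price:.0f} more")
print(f"Next-gen card later: {nextgen_later:.1f}x for about the same money")
```

Under those assumptions you spend the same $500 either way, but the SLI route locks you into the old core's feature set; what you're really buying is that speed *now* instead of in two years.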