I get along with people on these forums, and I may "know some things," but I don't know all the things there are to know.
Someone had asked "how" or "why (you would want)" to run the memory at a higher bus speed than the motherboard FSB. Someone -- maybe it was Graysky (the Guru -- no joke) -- noted that the FSB has always been a bottleneck.
There are memory operations that take place without pushing a lot of data through the CPU. There is a feature that has been around a long time now called "Direct Memory Access" [thus the "DMA" settings on motherboards].
Assuming that data is being transferred to a device (e.g., a hard disk) or processed through the CPU (which means going over the bottlenecked FSB), there are going to be unused memory clock cycles when the memory is run at a ratio other than 1:1 -- cycles that FSB-bound traffic can't fill, but that things like DMA transfers still can.
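To put some rough numbers on that, here's a little Python sketch (the FSB and dividers are hypothetical examples, not readings from my board) of how the memory clock outruns the FSB when the ratio isn't 1:1:

```python
# Rough illustration with hypothetical numbers: the effective memory
# clock you get from a CPU:RAM divider, and how much of it the FSB
# alone can't keep busy. The "spare" figure is a crude simplification.

def memory_clock(fsb_mhz, cpu_part, ram_part):
    """Effective memory clock for a CPU:RAM divider such as 4:5."""
    return fsb_mhz * ram_part / cpu_part

fsb = 400.0  # e.g., a 1600 MT/s quad-pumped FSB has a 400 MHz base clock
for cpu_part, ram_part in [(1, 1), (4, 5), (2, 3)]:
    mem = memory_clock(fsb, cpu_part, ram_part)
    spare = mem - fsb  # memory MHz beyond what the FSB base clock provides
    print(f"{cpu_part}:{ram_part} -> {mem:.0f} MHz (DDR2-{int(mem * 2)}), "
          f"{spare:.0f} MHz the FSB can't fill on its own")
```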
On the DDR2 angle, the "skinny" is that latency settings "aren't as important as they were with DDR," but they still buy bandwidth. Also, voltage increases for memory are not only inevitable as you push up the FSB; they are likely required when tightening the latencies at any given FSB speed. I've also discovered that the voltage requirements are greater when running 1:1 than when running a ratio of 4:5. So at 1:1, with a lower FSB and tighter latencies -- a combination you'd think would be more accommodating -- I've seen the voltage requirement come out 0.025 to 0.050V higher. With a 4:5 ratio and the "tightest" latencies I can manage while achieving the same or higher synthetic bandwidth, I'm running a significantly higher memory clock and looser latencies (true), but lower voltage. I can tweak the combination of memory clock, latencies and voltage to get a higher bandwidth result -- possibly at 0.025V or more below what a lower-bandwidth setting at lower FSB and tighter latencies required.
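For a back-of-envelope sense of why the 4:5 setting can still come out ahead, here's a quick Python comparison (the clocks and CAS numbers are hypothetical, and this is peak theoretical bandwidth, not a benchmark):

```python
# Back-of-envelope comparison with hypothetical settings: peak transfer
# rate vs. how long CAS actually takes in nanoseconds. A higher clock
# with a looser CAS can match the tighter setting's latency in real
# time while offering more raw bandwidth.

def peak_bandwidth_mb_s(mem_clock_mhz, bus_bytes=8, channels=2):
    # DDR: two transfers per clock on a 64-bit (8-byte) bus, dual channel
    return mem_clock_mhz * 2 * bus_bytes * channels

def cas_in_ns(cas_cycles, mem_clock_mhz):
    return cas_cycles * 1000.0 / mem_clock_mhz

configs = [
    ("1:1  DDR2-800  CL4", 400.0, 4),   # tighter latency, lower clock
    ("4:5  DDR2-1000 CL5", 500.0, 5),   # looser latency, higher clock
]
for name, clock, cas in configs:
    print(f"{name}: peak {peak_bandwidth_mb_s(clock):.0f} MB/s, "
          f"CAS {cas_in_ns(cas, clock):.1f} ns")
```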
You can think of this as a big stairway. For any given CPU : RAM ratio, there's a range of FSB speeds that a particular regime of latency settings will tolerate. As you move up the FSB and try to keep the same latencies, the voltage requirement climbs. At some point you hit a threshold where you either loosen the latencies to go higher, or you push the voltage over the manufacturer's maximum spec.
So before I go further and get another nomination for "member most likely to write a book when a few sentences will do" --
The synthetic bandwidths are one basis of comparison to "keep score" on how you're doing. Even though they're synthetic, they have led me to settings where I can get marginal improvements in actual game-play -- better scores -- even with CPU : RAM ratios that are not 1:1.
1:1 is an ideal. But you impose your own limits on how high you'll push the CPU VCORE (and risk early mortality), how high you'll push the VDIMM (and risk the manufacturer refusing a replacement under its limited lifetime warranty), and how far you'll push the motherboard -- chipset, HT/MCH voltage, etc. -- and risk a dead circuit board with that. So at some point, you may decide to use a different ratio, given the observations I made above and the limitations of your parts and budget.
While I don't have the time now, I think there's a logical explanation for why some CPU : RAM ratios are better than others, as mentioned in a recent Anandtech article on OC'ing the QX9650. I'm guessing it has to do with the integer arithmetic of latencies, data bursts and the number of bytes moved in any given memory operation -- so that some ratios line up the movement of data better than others, even when they aren't "1-to-1."
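To make that guess a little more concrete -- and this is purely my speculation, not anything from the Anandtech piece -- here's a toy Python sketch of how often the FSB and memory clock edges line up again for different ratios; the idea would be that short repeat patterns let data bursts mesh with the bus more cleanly than long, awkward ones:

```python
# Purely speculative toy model: for a CPU:RAM divider, how many FSB
# and memory cycles pass before the two clocks' rising edges coincide
# again. With FSB:RAM = 4:5, four FSB cycles span the same time as
# five memory cycles.
from math import gcd

def alignment(cpu_part, ram_part):
    g = gcd(cpu_part, ram_part)
    return cpu_part // g, ram_part // g  # (FSB cycles, memory cycles)

for cpu_part, ram_part in [(1, 1), (4, 5), (2, 3), (3, 5), (5, 6)]:
    fsb_cycles, mem_cycles = alignment(cpu_part, ram_part)
    print(f"{cpu_part}:{ram_part} -> edges realign every "
          f"{fsb_cycles} FSB cycles / {mem_cycles} memory cycles")
```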
Somebody else can explore this or correct me, and I'd be interested in reading their thoughts on the matter.