I'll be trying all this in the next few weeks. Meanwhile,
(1) Time trials show that while memory speeds matter, cpu speeds matter far more. It depends on the application, but I've always seen a 10:1 ratio or better, which says: get the cpu speed as high as it will go, then dumb down the memory subsystem until it can keep up.
(2) The "stunt" suicide run overclocks on the forums we make fun of all dumb down the memory subsystem.
(3) We're all sure that it's the memory subsystem that can't handle the traffic. To oversimplify, say we're doubling the traffic by going to 8 GB. This argues that we should dumb down the memory subsystem by half. By (1), that will have only a secondary effect on performance.
So why do I get the feeling that no one has slowed their memory to a crawl? Go look again at the 4 GHz suicide runs, with memory at 7-7-7-x or worse.
Overclocking is like flying blindfolded in a large, many-dimensional cave, with the only indications of its shape coming when we bang into a wall. It makes the six-dimensional sport of windsurfing look like child's play. Yet standard operating procedure seems to be to fly in a preferred straight line until we bang into a wall, then step back.
I got the idea from Kris Boughton's articles to map Vcore over a range of speeds up to my target speed. I make a spreadsheet grid, with one axis giving the voltage steps offered by my BIOS and the other axis my FSB frequency in steps of 3 or so, with all other settings held constant. I mark cells according to whether they can boot and pass five minutes of a stability test. Every now and then I take a break, run a cell for 24 hours, and mark it as stable. One sees a very clean curve. Try voltages under stock; the curve goes both ways.
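For what it's worth, here's a minimal sketch (Python, purely hypothetical numbers and names) of what that spreadsheet amounts to: one marking per (FSB, Vcore) cell, with the "curve" read off as the lowest Vcore that passes at each FSB. Nothing here can reboot the machine; results get typed in by hand after each BIOS change.

```python
NO_BOOT, BOOT_5MIN, STABLE_24H = "x", "5m", "24h"   # cell markings

grid = {}   # (fsb_mhz, vcore_volts) -> marking

def record(fsb, vcore, status):
    """Record the outcome of one trial at a given FSB/Vcore cell."""
    grid[(fsb, vcore)] = status

def lowest_working_vcore(fsb):
    """The 'curve': the minimum Vcore that at least passed the 5-minute test."""
    passing = [v for (f, v), s in grid.items()
               if f == fsb and s in (BOOT_5MIN, STABLE_24H)]
    return min(passing) if passing else None

# a few made-up example cells
record(333, 1.2375, NO_BOOT)
record(333, 1.2500, BOOT_5MIN)
record(360, 1.3125, STABLE_24H)

for fsb in (333, 360):
    print(fsb, lowest_working_vcore(fsb))
```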
This curve does eventually spike upward, but other factors intervene first: overclocks near 3.6 GHz need other mobo voltages tweaked, and so on. But one's first reaction is always "needs salt": let's up Vcore, that'll force it to work. Seeing the entire curve, I can tell when other factors start to intervene and when a knee-jerk Vcore fix stops making sense; at that point I need to learn something else instead.
In other words, I'm mapping how a two-dimensional plane intersects the overclocking cave, which gives me much better information about its shape than sampling along single lines.
Here, the two-dimensional plane of interest is cpu frequency versus memory traffic. Each of these is a subsystem. One already needs to know (as people here do) what Vcore it takes for the cpu frequencies one wants. Calming memory involves basically all the memory parameters, e.g. taking 4-4-4-x in steps down to 7-7-7-y, raising tRD and tRFC, and so on. If these aren't all calmed down at once, getting most of them right won't matter.
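To make the "all at once" point concrete, here's a sketch of lockstep relaxation with made-up starting values and step sizes: one relaxation level bumps every parameter together, instead of loosening CAS alone while tRD and tRFC stay tight.

```python
# Hypothetical tight settings and per-level steps, just to show the lockstep idea.
TIGHT = {"CAS": 4, "tRCD": 4, "tRP": 4, "tRAS": 12, "tRD": 6, "tRFC": 42}
STEP  = {"CAS": 1, "tRCD": 1, "tRP": 1, "tRAS": 3,  "tRD": 1, "tRFC": 6}

def relaxed(level):
    """Memory settings at a given relaxation level (0 = tight)."""
    return {name: TIGHT[name] + STEP[name] * level for name in TIGHT}

# level 0 is 4-4-4-12; level 3 lands at 7-7-7-21, with tRD and tRFC loosened too
for level in range(4):
    print(level, relaxed(level))
```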
Think of Apollo 13. Three guys would have died up in that can if engineers on the ground hadn't managed to calm down every subsystem to get under their revised power budget. All we've got to do is throttle memory traffic; it isn't going to be rocket science.
I simply don't believe that one hits a sharp cutoff independent of memory settings. There's got to be shape there, and the only way to find it is to step the FSB up a few clicks at a time and map exactly how much the memory has to be relaxed before things slow down enough to continue.
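The mapping itself could look something like this sketch. Here is_stable is a stand-in for a real boot plus short stress test at those settings (no such test can be faked in code), and the relaxation level is the lockstep idea above; everything else is hypothetical.

```python
def map_boundary(is_stable, fsb_start, fsb_stop, fsb_step=3, max_relax=6):
    """For each FSB step, record the least memory relaxation that still passes.

    is_stable(fsb, relax_level) must run a real boot + short stress test at
    those settings and return True/False.
    """
    boundary = {}
    for fsb in range(fsb_start, fsb_stop + 1, fsb_step):
        for level in range(max_relax + 1):
            if is_stable(fsb, level):
                boundary[fsb] = level
                break
        else:
            boundary[fsb] = None   # a real wall: no relaxation level helps here
    return boundary
```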
The 10x multiplier of the Q6700 does start to look good here; it would allow 3.6 GHz at an FSB of 360. The prices are no longer 2:1, so I'll switch if I have to.
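Just to spell out the arithmetic (the 9x comparison is my assumption about the alternative chip, not something stated above):

```python
def core_clock_mhz(multiplier, fsb_mhz):
    return multiplier * fsb_mhz

print(core_clock_mhz(10, 360))   # Q6700 at 10x: 3600 MHz = 3.6 GHz
print(core_clock_mhz(9, 400))    # a 9x chip needs FSB 400 for the same 3.6 GHz
```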