Is there a reason you disallowed copying from the PDF?
edit: See disclaimer in my sig. The potential faster switching of transistors on new process nodes doesn't necessarily mean anything. The manufacturer would probably just rate the new CPUs at a higher stock frequency. OCers are taking advantage of margin, and you want the process that the manufacturer leaves more margin on.
I'd also question whether a 2003 130nm CPU really wouldn't OC better than a 2004 130nm CPU. It's possible that the manufacturer has a better idea of the long-term reliability characteristics of the transistors (or has improved the long-term reliability) and can therefore leave less margin as time goes on. To elaborate: when a CPU comes from the fab it gets put through burn-in, and it's tested to find its maximum frequency. A sample of parts is operated in extreme conditions to determine the long-term characteristics of the devices and the margins required. A margin is then applied based on expected operating conditions and degradation of the devices over the lifetime of the part. As the process matures, the devices could be made more reliable, or the manufacturer could use less-conservative margins based on longer-term testing. This is all theorizing; pm would probably know to what extent manufacturers do that kind of thing.
When it comes to "kinks" in a new process, I'd think those would show up more in a lower speed rating by the manufacturer rather than a reduced OC potential.
Regarding heat output, it's not as straightforward as "the added heat generated by the 2.2GHz part while running at 3GHz will likely be more than the added heat generated by the 2.6GHz part while running at the same speed." There are a whole bunch of complexities that may result in either one burning more power at any given speed (leakage, gate lengths / oxide thicknesses, etc.). I'd venture a guess that nowadays the 2.2GHz part is more likely to run hotter than the 2.6GHz part, but that the story would have been reversed more than a couple of years ago.
The power output may not be the reason the 2.2GHz part would need better cooling than the 2.6GHz part: the reason could be that its transistors are slow, so they need to be kept cooler to operate at the same speed.
I'd argue that off-brand RAM is fine so long as you're running it within its rated specs (e.g. off-brand 800MHz RAM vs name-brand 667MHz RAM).
If an image is "Courtesy ASUS" that means they explicitly gave you permission to use it, doesn't it? Did you get permission, or did you just use it?
In the description of "CPU Voltage" I don't think you make it clear enough that you can kill your CPU or significantly shorten its lifetime by increasing the value. Out of curiosity, what CPU did you use that runs on "1.8V-2.5V" and works with motherboards that have PCIe slots? IIRC my 700MHz Athlon ran on less than that 8 years ago.
If the CPU temperature in the BIOS is anywhere near the numbers you gave, I'd be seriously worried. In the BIOS, the CPU is mostly idle, so it's consuming significantly less power than it'll consume when it's under load.
You CANNOT assume current doesn't change much with a small voltage change. Even for a simple resistor it changes linearly with voltage, so a 10% voltage increase gives a 10% current increase, and therefore a 21% power increase. With transistors it can get worse, since things like leakage increase drastically with voltage. A better (and still simplified) way to look at power is P = C * V^2 * F, or capacitance * voltage squared * frequency. This is probably the most serious error you made.
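To put numbers on the arithmetic above, here's a quick sketch (the capacitance, voltage, and frequency values are made up purely for illustration):

```python
# Illustrative power-scaling arithmetic; all numbers are hypothetical.

def resistor_power(v, r):
    """P = V^2 / R for a plain resistor: current scales linearly with voltage."""
    return v**2 / r

def dynamic_power(c, v, f):
    """Simplified CMOS dynamic power: P = C * V^2 * F."""
    return c * v**2 * f

# A 10% voltage bump on a resistor gives 21% more power (1.1^2 = 1.21).
base = resistor_power(1.0, 1.0)
bumped = resistor_power(1.1, 1.0)
print(f"resistor power increase: {bumped / base - 1:.0%}")  # 21%

# The same quadratic voltage dependence shows up in the CMOS formula,
# multiplied by frequency: a 10% bump to both V and F together gives
# 1.1^2 * 1.1 = 1.331, i.e. ~33% more dynamic power.
p0 = dynamic_power(1e-9, 1.2, 2.2e9)               # hypothetical C, V, F
p1 = dynamic_power(1e-9, 1.2 * 1.1, 2.2e9 * 1.1)   # +10% voltage and frequency
print(f"dynamic power increase: {p1 / p0 - 1:.0%}")  # 33%
```

Note that this only covers dynamic (switching) power; leakage grows even faster with voltage, so real parts tend to do worse than this formula suggests.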
edit2: Oh, your "1.8-2.5V" was probably a copy/paste from the DDR2 voltages.