A few thoughts:
1). Promoting the competition on this forum while using it to showcase your own cooling design is a bit sketchy. That's the sort of thing for which paid advertisements are intended. It's close to spamming, especially when it's a local competition in which few (if any) of us could compete for obvious reasons.
2). Performance metrics are king in any competition like this. If maximum performance (rather than clockspeed) is the goal, then you need to be up-front about which benchmarks will determine maximum performance. A quick look at any of Anandtech's CPU or memory reviews (especially memory) should give you some ideas about what to use.
3). Your categories are, in my opinion, pretty weak. What you need, per class (Air or Water), are the following prizes:
a). Budget Overclock: Set a specific minimum performance point in all benchmarks and then have competitors reach that with their overclock. Lowest overall amount of money paid to reach that minimum performance point wins the prize.
b). Bang Per Buck: Winner achieves the best ratio of performance per dollar. It's an easy metric to measure provided you can agree on a total performance score built from all the benchmarks used (see the sketch after this list).
c). Best Overall Performance: highest total performance score, regardless of cost.
d). PC Design is more a dog-and-pony show sort of thing that is highly arbitrary. Best temps may be too finicky since sensors can be such a bitch to monitor and keep consistent from one rig to the next. I would skip the temp category just on the KISS rule alone.
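To make the budget and bang-per-buck categories concrete, here's a minimal scoring sketch (Python). Everything in it is an assumption on my part: the benchmark names, weights, and the idea of normalizing each result against a baseline rig are placeholders the organizers would need to pin down for real.

```
# Hypothetical scoring for the Budget and Bang Per Buck categories.
# Benchmark names, reference scores, and weights are placeholders.

# Scores from a stock "baseline" rig, used to normalize results so that
# benchmarks with different units can be folded into one number.
REFERENCE = {"superpi_1m_s": 30.0, "cinebench_pts": 3000, "memory_bw_mbs": 8000}
WEIGHTS   = {"superpi_1m_s": 1.0,  "cinebench_pts": 1.0,  "memory_bw_mbs": 1.0}
LOWER_IS_BETTER = {"superpi_1m_s"}      # times: smaller = faster

def total_performance(results):
    """Weighted sum of normalized benchmark results (1.0 per bench == baseline)."""
    score = 0.0
    for bench, value in results.items():
        ratio = REFERENCE[bench] / value if bench in LOWER_IS_BETTER else value / REFERENCE[bench]
        score += WEIGHTS[bench] * ratio
    return score

def bang_per_buck(results, total_cost_usd):
    """Bang Per Buck: total performance score per dollar spent."""
    return total_performance(results) / total_cost_usd

def budget_winner(entries, minimum_score):
    """Budget Overclock: cheapest rig that hits the minimum performance point."""
    qualifying = [e for e in entries if total_performance(e["results"]) >= minimum_score]
    return min(qualifying, key=lambda e: e["cost"], default=None)
```

Normalizing against a baseline is just one way to keep seconds and points comparable; geometric means or per-benchmark minimums would work too, as long as everyone agrees on the formula up front.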
In addition to those points, I had a few other random thoughts:
To keep hardware sourcing fair, here's how I'd handle it:
Competitor chooses a platform, then the CPU/memory/board on that basis. The event organizers, working with one or more vendors friendly to cherry-picking (Tankguys?), then provide the best-out-of-ten CPU, memory, and motherboard (best-out-of-ten on the RAM as a kit, rather than per DIMM); testing to confirm the quality of each unit against the other nine in its lot would be done with known-good complementary components.
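Purely to illustrate the cherry-picking step (the field names and test values below are made up), the vendor-side binning could be as simple as screening each of the ten units with the same known-good complementary parts and shipping whichever one scores best:

```
# Hypothetical cherry-picking helper: given screening results for the 10
# units in a lot (each tested with the same known-good complementary
# components), pick the unit to ship. Field names are placeholders.

def pick_best_unit(lot_results):
    # Drop anything that wasn't stable at stock settings during screening.
    candidates = [u for u in lot_results if u["stock_stable"]]
    # Ship the unit that clocked highest while stable in the screening run.
    return max(candidates, key=lambda u: u["max_stable_mhz"])

lot = [
    {"serial": "CPU-0001", "max_stable_mhz": 3600, "stock_stable": True},
    {"serial": "CPU-0002", "max_stable_mhz": 3800, "stock_stable": True},
    {"serial": "CPU-0003", "max_stable_mhz": 3900, "stock_stable": False},
]
print(pick_best_unit(lot)["serial"])    # -> CPU-0002
```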
Event organizers will specify a small list of approved PSUs, sourced through the event coordinators, to keep budget builders from using units that might fail during the competition or barely survive the few benchmark runs needed to win.
Event organizers will provide the benchmark software, possibly through a Java applet or what have you, so that they can control the benchmark runs more thoroughly. The bench will be run on a virgin install of an OS chosen by the coordinators.
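One way the organizers' benchmark package could keep results honest, whatever form it takes: wrap the benchmark run and sign the recorded output with a key only the organizers hold, so an edited or pasted-in result won't verify. This is just a sketch under those assumptions; the benchmark command and the key handling are placeholders.

```
# Sketch of an organizer-controlled benchmark wrapper: runs the benchmark,
# records the raw output, and attaches an HMAC computed with an
# organizer-only secret so tampered results fail verification.
# The benchmark command is a stand-in, not a real tool name.
import hashlib, hmac, json, subprocess, time

ORGANIZER_SECRET = b"replace-with-organizer-only-key"

def run_and_sign(bench_cmd):
    proc = subprocess.run(bench_cmd, capture_output=True, text=True)
    record = {
        "cmd": bench_cmd,
        "timestamp": time.time(),
        "returncode": proc.returncode,
        "output": proc.stdout,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(ORGANIZER_SECRET, payload, hashlib.sha256).hexdigest()
    return record

# Example (placeholder command):
# print(run_and_sign(["./official_bench", "--run", "cpu"]))
```

Of course, if the secret ships inside the wrapper it can be dug out, which is part of why the remote-control approach below is more airtight.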
Alternatively, the event coordinators themselves could provide a custom Linux distro that is little more than a trojan horse that gives them complete control over the system so they can conduct benchmarking remotely without allowing the builder to interfere with it. Other OSes will be usable prior to final benchmarking to test stability and tweak settings. Non-trojaned versions of the distro would be provided to allow competitors to assess stability on the competition OS.
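For the remote-control angle, the locked-down distro really only needs an SSH daemon with the organizers' key preinstalled; a coordinator-side script can then kick off the benchmark without the builder touching it. The hostname, username, key file, and benchmark path below are all hypothetical.

```
# Coordinator-side sketch: connect to a competitor's rig (booted into the
# organizers' locked-down distro) and run the benchmark remotely.
# Host, username, key path, and benchmark path are placeholders.
import paramiko

def remote_bench(host):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="organizer", key_filename="organizer_key")
    try:
        # Run the organizer-provided benchmark and capture its output.
        _stdin, stdout, _stderr = client.exec_command("/opt/comp/official_bench --full")
        return stdout.read().decode()
    finally:
        client.close()

# results = remote_bench("competitor-rig-07.lan")
```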
Any system submitted that could not pass pre-ordained stability tests would be immediately disqualified. Stability tests would need to be a suite of at least two programs such as Memtest, Prime95, Linpack, etc.
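The gauntlet itself could be as dumb as a script that runs each stress tool in turn and fails the rig if any of them errors out. The commands below (stress-ng and memtester) are just stand-ins for whatever suite gets agreed on (Prime95, Linpack, Memtest, whatever).

```
# Sketch of a pass/fail stability gauntlet. Each entry is (label, command);
# the commands shown are stand-ins -- swap in the agreed-upon suite.
import subprocess

STABILITY_SUITE = [
    ("CPU stress, 1 hour", ["stress-ng", "--cpu", "0", "--timeout", "3600"]),
    ("memory test, 1 GiB x3", ["memtester", "1024M", "3"]),
]

def run_gauntlet():
    for label, cmd in STABILITY_SUITE:
        print("Running:", label)
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print("FAILED:", label, "-- system disqualified")
            return False
    print("All stability tests passed")
    return True

if __name__ == "__main__":
    run_gauntlet()
```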
Competitors themselves will acquire all other hardware on their own time. Obviously this still leaves an element of luck, since one competitor might get a better HSF than another despite ordering the same product from the same vendor, but the variations will be less severe, and modifications to improve performance or even out defects remain viable (whereas they aren't when CPUs of different-quality silicon are involved).
Lapping will be allowed, but some sort of scheme will be necessary to ensure the competitor is using the hardware provided. IHS removal will also be allowed, as will DIMM heatspreader removal/replacement and modifications to the cooling systems on the provided motherboards (again, with some kind of hardware ID scheme in every case).
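For the hardware-ID scheme, the simplest thing I can think of is for the organizers to record serial numbers when the cherry-picked parts ship and then re-read them on the competition OS. A rough sketch using dmidecode (the manifest format and values are made up; dmidecode needs root, and not every board reports every serial):

```
# Sketch of a hardware-ID check: compare serials read off the running system
# against the manifest recorded when the cherry-picked parts shipped.
# Requires root for dmidecode; the manifest is a made-up placeholder.
import subprocess

def read_serial(keyword):
    # e.g. "baseboard-serial-number" or "system-serial-number"
    out = subprocess.run(["dmidecode", "-s", keyword],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def verify(manifest):
    ok = True
    for keyword, expected in manifest.items():
        actual = read_serial(keyword)
        if actual != expected:
            print("MISMATCH on", keyword, "- expected", expected, "got", actual)
            ok = False
    return ok

# Manifest recorded by organizers at shipping time (placeholder values):
# verify({"baseboard-serial-number": "MB123456"})
```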
Also, if you want gaming performance to be an element, consider a two-heat competition: the competitor overclocks the system in the first heat, then OCs the video card in the second heat, after which gaming benchmarks are run. Make it a separate competition. You could provide gaming categories analogous to the non-gaming ones (budget, bang-per-buck, best performance), so that the first heat is the competition discussed above and the second heat is simply an attempt to OC a card on the platforms that finished high enough in the first heat; the winners of the second heat might not be the same as the winners of the first. Sorting out how to do budget in the second heat may be tricky, since someone could go gimpy on the vid card to win the first heat while others who blew part of their budget on a good card might be the only ones competitive in the second, but that might not be a huge problem.