Extreme Edition chips are not clocked extravagantly high; they are simply the highest bin within their family's TDP, and that bin is supposed to justify costing twice as much as the next one down. We're now at the point where a $200 chip can outpace any Extreme Edition without even raising its voltage.
The OP is saying that, perhaps not for $1000 but for $2000, Intel should consider offering and supporting an official 170–200 W TDP part. Yes, they could easily slap some transparent, Intel-branded fans on a Corsair A70-based heatsink, bundle them with the limited number of chips that can do 5 GHz at 1.2 V, and then ship those at 1.25 V to be safe, and the number of people dumb enough to buy such a product would be about as scarce as the chips validated for the role. They could say it will only work in "certified" Z77/X79 boards at $1000 each, only with certain power supplies, chassis, etc. IBM has sold 200 W server CPUs (that cost far more than $2000 each) in dense 4S systems for years now. It is perfectly within Intel's power to offer both Core and Xeon devices at this power level, so why shouldn't they?
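For a rough sense of what that 1.25 V guard-band costs: dynamic CMOS power scales roughly as C·V²·f, so at a fixed clock the bump from a validated 1.2 V to a shipped 1.25 V is about an 8–9% power penalty on its own. A quick back-of-the-envelope sketch (all figures illustrative, not Intel's actual numbers):

```python
# Dynamic power goes roughly as P ~ C * V^2 * f, so at a fixed frequency
# the voltage guard-band costs (V_shipped / V_validated)^2 in power.
# All values below are assumptions for illustration only.

v_validated = 1.20   # volts, what the best silicon actually needs for 5 GHz
v_shipped   = 1.25   # volts, with safety margin added at the factory

power_ratio = (v_shipped / v_validated) ** 2

baseline_tdp = 180.0  # watts, assumed midpoint of the 170-200 W range
print(f"guard-band power penalty: {power_ratio:.3f}x")
print(f"{baseline_tdp:.0f} W at 1.2 V becomes ~{baseline_tdp * power_ratio:.0f} W at 1.25 V")
```

That extra ~15 W from margin alone is part of why such a bin would need the certified boards, coolers, and power delivery described above.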
They could have had very wide margins on such a product and its supporting infrastructure, especially if this TDP had been introduced on the 45 and 32 nm nodes, which were very generous when you compare them with 90, 65, and 22 nm. The reason they won't sell products like this is that all the frequency headroom we take for granted is going to come in handy sooner or later, when x86 finally runs out of good tocks and the VLSI gods run out of good ticks. Eventually Intel will be forced to chase raw clock speed again for better performance, and that means TDPs could climb as they did in the early 2000s, until we figure out a better way to build computers.