I thought it was between 62 and 65C, but if Intel says 71C, then that's the maximum acceptable TCase value. It may be that the 62/65C figure (if I'm remembering it correctly) is the temperature at which the processor will throttle when throttling is enabled in the BIOS. Or I may be wrong, and it throttles at this 71C spec.
The thing to remember is that less heat is better, even as you decide that a higher over-clock is "better." The other thing to remember is that room-ambient temperature raises both TCase and the TJunction core temperatures degree-for-degree. So if you plan to run your computer at a room temperature of 80F or less, and if TCase can be held below 70C, then you won't have to complicate your over-clocking by enabling throttling in the BIOS.
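The degree-for-degree point makes for a quick back-of-the-envelope check. This is only a sketch of that 1:1 scaling rule, with illustrative numbers I've made up (not measurements from the post):

```python
# Rough headroom check: room-ambient raises load TCase degree-for-degree.
# All numbers here are illustrative examples, not measured values.

def tcase_at_ambient(tcase_measured_c, ambient_measured_c, ambient_new_c):
    """Project load TCase at a new room temperature, assuming 1:1 scaling."""
    return tcase_measured_c + (ambient_new_c - ambient_measured_c)

# If load TCase reads 63C in a 22C (~72F) room, a 27C (~80F) summer room
# would be expected to push it about 5C higher:
projected = tcase_at_ambient(63.0, 22.0, 27.0)
print(f"Projected summer load TCase: {projected}C")  # 68.0C -- still under 70C
```

If the projected figure crosses your ceiling (say, 70C against the 71C spec), that's the signal to back off the over-clock before summer rather than after.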
Since temperatures will increase (assuming constant room-ambient) roughly with the square of the voltage and linearly with FSB speed, I hold the view that there is some point on a parabolic or exponentially increasing schedule of temperatures where you need to limit your VCore increases especially, and your FSB (MHz) setting generally.
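That schedule follows from the standard dynamic-power relation P ∝ C·V²·f, since the heat your cooler must remove tracks power. Here's a minimal sketch of the relative scaling; the voltage and clock figures are hypothetical examples, not stock Q6600 specs:

```python
# Relative dynamic-power estimate for an over-clock.
# Assumes the usual CMOS relation P ~ C * V^2 * f, so the ratio of new
# power to old is (V_new/V_old)^2 * (f_new/f_old). Temperature rise over
# ambient scales roughly with this ratio for a given cooler.
# Example numbers below are illustrative, not measured values.

def relative_power(v_old, v_new, f_old, f_new):
    """Ratio of new dynamic power to old: (voltage ratio)^2 * (clock ratio)."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Hypothetical example: 1.25 V / 2.4 GHz raised to 1.35 V / 3.0 GHz.
ratio = relative_power(1.25, 1.35, 2.4, 3.0)
print(f"Dynamic power rises by a factor of about {ratio:.2f}")  # about 1.46
```

Notice the asymmetry: the 25% clock bump contributes linearly, but the 8% voltage bump gets squared, which is exactly why VCore is the knob to limit "especially."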
Personally, I choose to limit my over-clocking to a VCore at which the core temperatures have increased no more than perhaps 10C -- and in fact, I feel more comfortable if I can over-clock with little perceptible increase in temperature at all. To my recollection, at room-ambients above 70F, the load TCase temperature on my Q6600 was just below 40C at the stock settings. Right now, the load value is around 50C, and in the summer it can get as high as 55C with the same OC settings. This means that my "core" or TJunction values range between 60 and 65+C depending on room-ambient.
YOU'RE the one who has to decide how much risk or uncertainty you'll accept against the prospect of frequent processor or motherboard purchases. Then again, if the normal processor lifespan is 10 years and you're going to replace it within two or three years anyway, you're also the one who gets to choose the parameters you're willing to live with.
This is really all about the industrial application of statistics. If Intel chooses to print "1.35V maximum" on the retail box, they must, as a business, be fairly sure that RMAs under warranty will be nothing or next to nothing. That's their view of the failure rate under a schedule of voltages in their testing labs. Nobody can tell you (except Intel -- and they probably won't anyway) whether a 5% increase in voltage above that level means a 0% or 1% or 2% or a 5% or a 10% or a 20% increase in failures over one year, two years, etc. due to electromigration, regardless of how well you cool the system. Similarly, there is some increasing schedule of risk when you run the system constantly at temperatures approaching the 71C spec -- increasing with each degree closer to the spec, and certainly with each degree beyond it.
We count on two things: that Intel's specs are based on their desire to reduce costs and customer-relations troubles, and that we're going to get a processor that is either "average" in relation to these schedules of risks, temperatures and voltages, or that we'll "get lucky" and obtain, at a lower price, a processor performing at the level of their "flagship" models -- one mistakenly binned for the model that reflects the lower price.
As geeks, and generally as a society, we need to accommodate ourselves, first, to the idea that random variation IS a form of order in Nature, and second, to the idea that uncertainty, probability and risk pervade every aspect of life -- even those aspects which derive from careful engineering and well-controlled production processes.