Instead of voltage and heat, wouldn't it technically be current and temperature causing the damage?
Understand that there is a manifold of degradation mechanisms occurring in every IC. Out of the hundreds of ways that exist for an IC to degrade, all that matters (to the end user) is the one that does it first.
Electromigration, void formation, dielectric breakdown, hot carriers, etc. Each is a physically unique phenomenon involving special conditions (geometry- and species-dependent), and each is dependent on temperature, voltage, and current in accordance with its own physics.
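To make one of those concrete: electromigration lifetime is classically modeled with Black's equation, which ties current density and temperature together in one expression. A minimal sketch in Python, with the prefactor, current-density exponent, and activation energy set to illustrative values (not from any particular process):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(j, temp_k, a=1.0, n=2.0, ea=0.9):
    """Black's equation: mean time to failure of a metal line.
    j: current density (A/cm^2), temp_k: temperature (K).
    a, n, and ea are fit parameters; the values here are illustrative."""
    return a * j**(-n) * math.exp(ea / (K_B * temp_k))

# Same line, same current, 20 C hotter: the exponential term
# cuts the expected lifetime by several times.
cool = black_mttf(j=1e6, temp_k=358)   # 85 C
hot  = black_mttf(j=1e6, temp_k=378)   # 105 C
print(f"lifetime ratio, 85 C vs 105 C: {cool / hot:.1f}x")
```

Note the current density enters as a power law and temperature as an exponential, which is why both knobs matter but temperature bites harder near the limit.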
Apply no voltage to the IC but heat it up and you activate all the purely thermal mechanisms, including diffusion. Dopants migrate and their concentrations shift, and when those dopants move around in the channel region of the transistor they degrade the electrical characteristics of the xtor.
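The temperature sensitivity of diffusion follows an Arrhenius law, which is why dopants sit essentially still at operating temperatures but move readily during high-temperature processing. A quick sketch, with D0 and Ea set to illustrative values (not for any specific dopant):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def diffusion_coeff(temp_k, d0=1.0, ea=3.5):
    """Arrhenius form of a dopant diffusion coefficient.
    d0 (cm^2/s) and ea (eV) are illustrative placeholders."""
    return d0 * math.exp(-ea / (K_B * temp_k))

# Dopant motion that is negligible at 100 C becomes enormous at 1000 C.
ratio = diffusion_coeff(1273) / diffusion_coeff(373)
print(f"diffusion speedup from 100 C to 1000 C: {ratio:.2e}")
```

The point is the shape of the curve, not the numbers: any sustained excess heat pushes every thermally-activated mechanism up the same kind of exponential.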
Apply no heat (take it down to near absolute zero) but apply a voltage with no current (i.e. place the IC into an electromagnetic field) and you will activate ions within the dielectrics, which then become charge carriers - the same phenomenon we see in fast-ion conductors.
These charge carriers reduce the insulating properties of the dielectric materials within the IC (between the wires as well as within the xtor itself), and the result is a degradation in the leakage properties of the device.
Apply a voltage and allow current to flow and now you have electrons moving through both the leaky dielectric materials and the intended conduction paths - both of which are permanently changed over time as the electrons literally knock atoms out of place, helping them overcome their thermally-limited activation barriers.
This results in void formation within copper lines, and in leaky dielectric materials becoming so leaky that they start generating lots and lots of heat (Joule heating), at which point catastrophic degradation sets in rather quickly.
So you see, your CPU can die from basically any source of energy, be it thermal or electromagnetic; it is just a question of how quickly and from how much.
It is an industry standard to design process nodes for an intrinsic lifetime reliability of 10 yrs at the extreme corners of the IC's operating specs (max temp and max voltage). But it is not a law and it is not written in stone, so if a company wanted to cut corners and accept the risk of a bunch of in-field fails, it could certainly build less robust ICs. I know of no cases where this has happened, though; no one wants the financial liability (it would be a penny-wise/pound-foolish move on the part of whoever did it, so basically no one does or will).
With today's TJmax-limited CPUs it is practically impossible to shortchange your CPU's lifespan by running it too hot. You have to increase the voltage to get into a regime where the lifespan is being significantly reduced. Once you get there, though, temperature definitely makes a bad situation even worse (even if you are below TJmax).
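A back-of-envelope acceleration-factor model shows why voltage is the bigger lever: lifetime models commonly use an exponential term in voltage and an Arrhenius term in temperature. The gamma and Ea values below are illustrative, not from any real qualification report:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def accel_factor(v_use, t_use_k, v_ref, t_ref_k, gamma=8.0, ea=0.7):
    """Wearout acceleration relative to reference conditions (>1 = faster).
    Exponential voltage term (gamma, per volt) times Arrhenius thermal
    term (ea, eV); both parameter values are illustrative."""
    volt = math.exp(gamma * (v_use - v_ref))
    therm = math.exp((ea / K_B) * (1.0 / t_ref_k - 1.0 / t_use_k))
    return volt * therm

# Relative to 1.2 V / 85 C: a 0.3 V overvolt accelerates wearout far
# more than an extra 10 C at stock voltage does.
print(accel_factor(1.5, 358, 1.2, 358))  # voltage bump alone
print(accel_factor(1.2, 368, 1.2, 358))  # +10 C alone
```

With these (made-up) parameters the voltage bump alone accelerates wearout by roughly an order of magnitude, while 10 C of extra heat at stock voltage costs well under 2x, which matches the point above: voltage gets you into the danger zone, and temperature then makes it worse.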
Pumping 1.5V through my Sandy Bridge while hitting 95°C at 5GHz in LinX was bad on both counts, but I expect it to last at least 2 yrs under those conditions and I really don't care if it dies after that (or degrades to the point of not sustaining the OC).