ATI and Nvidia aren't good examples. ATI, for instance, moved from 150nm to 130nm with a smaller, less complex core; they didn't want to deal with the double whammy of a new design AND a new process.
In AMD's case, the Winchester is functionally equivalent (clock for clock) to comparable 130nm parts, and as we can see, people are clocking Winchesters higher than the fastest processors released at 130nm. Perhaps it's simply a matter of capacity and economics. AMD sells far more low-end parts than high-end parts, and since their cost per die is lowest at 90nm (see the back-of-envelope math below), they want to dedicate as much 90nm capacity as possible to the parts that sell in the highest volume. They could technically produce a 4000+ at 90nm, but it's more cost-effective to spend the limited 90nm capacity on high-volume parts, especially while there is still headroom left at 130nm.

Since the price of a high-end part is relatively constant over time (the 4000+ will cost what the 3800+ used to cost, and so on), it makes the most economic sense to ONLY release faster processors as needed to keep ahead of Intel. If they came out with a 4500+ now, they would have to lower prices on the 4000+, and that would eat into their margins. Since Intel messed up, AMD has the opportunity to hold their aces and rake in the cash. But since the core can do what the core can do, tech-savvy consumers like us can tap that unused potential by overclocking. I hope that's really the case and there are no hidden caveats (electromigration or something worse).
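To put rough numbers on the cost-per-die point: a linear shrink from 130nm to 90nm cuts die area by about (90/130)^2 ≈ 0.48 in the ideal case, so each wafer yields nearly twice as many candidate dies. Here's a minimal back-of-envelope sketch using the standard dies-per-wafer approximation. The die areas (~144mm^2 for a 130nm Newcastle-class die, ~84mm^2 for a 90nm Winchester-class die) and the 200mm wafer size are illustrative assumptions, not official figures, and real yields will obviously differ.

```python
import math

# Assumed numbers for illustration only -- actual die sizes and yields vary.
WAFER_DIAMETER_MM = 200.0   # assuming 200mm wafers (AMD's Fab 30 era)
DIE_AREA_130NM = 144.0      # mm^2, roughly a 130nm Newcastle-class die (assumed)
DIE_AREA_90NM = 84.0        # mm^2, roughly a 90nm Winchester-class die (assumed)

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=WAFER_DIAMETER_MM):
    """Standard approximation: gross wafer area / die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2.0
    gross = math.pi * radius ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(gross - edge_loss)

for node, area in [("130nm", DIE_AREA_130NM), ("90nm", DIE_AREA_90NM)]:
    print(f"{node}: ~{dies_per_wafer(area)} candidate dies per wafer")
```

With these assumed sizes it works out to roughly 180 vs. 325 candidate dies per wafer, which is why every wafer start AMD can shift to 90nm volume parts is worth more than another 130nm start.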