for any given processor, there is *no* pre-planned difference between the various clock speed versions of the chip. Chips are made on big silicon wafers using a process called lithography. (Lithography works on the same basic principle as your normal Xerox copy machine.) There are many chips on each wafer, and there are many things that can go wrong in the lithography process. Each processor contains millions and millions of transistors -- 55 million in the P4, for example. These are microscopic, and simply due to the complexity of the task, there is no way to get *every* transistor made exactly the way the designers intended.
So say we look at a group of 9 P4 chips that have just been produced, taken from the center of a wafer. 1 of them might have no flaws, and it will get rated and sold at the highest speed. 3 of them might have flaws that keep them from running at the highest speed -- they will be sold at lower clock speed ratings. 3 others might have flaws in the cache... Intel simply disables half of the cache (the half with the flaws, obviously) and sells those chips as Celerons. And the last 2 might have so many flaws that they cannot run at all. So the 3.06GHz P4 that Intel sells for $500 or whatever it is was probably sitting on a wafer right next to two failed processors, a 2.4, a 2.6, a 2.0, and 3 Celerons. There is no purposeful distinction between these chips -- they are all designed to be exactly the same, but due to flaws inherent in the process, some of them will not come out as designed.
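To make the sorting logic concrete, here's a toy sketch of that 9-chip example as code. The flaw counts and speed grades are made up for illustration -- real binning is done by actual electrical testing, not by counting flaws -- but the decision structure is the same idea:

```python
# Hypothetical binning of the 9 dies from the example above.
# Thresholds and grade names are illustrative assumptions, not Intel's process.

def bin_die(core_flaws, cache_flaws):
    """Assign a made-up speed grade based on where the flaws landed."""
    if core_flaws == 0 and cache_flaws == 0:
        return "3.06GHz"      # flawless: top bin, highest price
    if core_flaws > 2:
        return "discard"      # too damaged to run at all
    if cache_flaws > 0:
        return "Celeron"      # disable the flawed cache half, sell cheaper
    return "2.0-2.6GHz"       # runs fine, just not at the top speed

# The nine dies from the example: (core_flaws, cache_flaws)
dies = [(0, 0),                      # the perfect one
        (1, 0), (2, 0), (1, 0),      # minor core flaws -> lower clocks
        (0, 1), (0, 2), (0, 1),      # cache flaws -> Celerons
        (5, 3), (9, 0)]              # too broken to ship

bins = [bin_die(core, cache) for core, cache in dies]
print(bins)
```

Every die goes through the same function -- the "difference" between a Celeron and a 3.06GHz P4 here is purely which branch its flaws happened to trigger.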
This is what manufacturers refer to as yield. Yield is pretty high for Intel and AMD, but specialty chips like very high-end graphics chips might have yields lower than 50% -- half or more of the chips on each wafer get thrown away because they don't work. Manufacturers can increase yield by coming out with new masks and lenses for the lithographic process, better environmental controls (reducing the number of stray particles floating around in the fab), etc.
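The arithmetic behind yield is simple -- working dies over total dies. The numbers below are invented to roughly match the figures in the text, not real fab data:

```python
# Toy yield arithmetic: yield = working dies / total dies on the wafer.
# Die counts here are made-up examples, not real manufacturing numbers.

def wafer_yield(working, total):
    return working / total

# A mature mainstream CPU process vs. a big specialty GPU die:
cpu_yield = wafer_yield(200, 220)   # most dies work
gpu_yield = wafer_yield(40, 100)    # under 50%: 60 dies scrapped per wafer

print(f"CPU yield: {cpu_yield:.0%}, GPU yield: {gpu_yield:.0%}")
```

Since a wafer costs roughly the same to process no matter how many dies survive, every point of yield goes straight to the bottom line -- which is why the mask and contamination improvements mentioned above are worth the money.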
The process by which the newly made chips are tested and rated for a certain clock speed is known as binning. A manufacturer may take a chip that passed the test at a high clock speed and sell it at a lower one. E.g., Intel had very good yield when they first started with the Northwood P4 core, and they obviously couldn't sell *all* of their chips at the highest speed rating -- there simply isn't that much demand at the high end of the market. So Intel sold a lot of very good chips at lower ratings than they had earned. Hence the obsession with 1.6GHz chips overclocking to 2.4GHz.
So those 1.6's cost Intel just as much to produce as the 2.4's. So why didn't they sell them as 2.4's, or even as 2.0's? Because, as I said above, there's just not as much demand for a $500 part as there is for a $150 part. Intel would rather get $150 for a chip than $0 for one that nobody's willing to buy. So there are complicated pricing schemes and whatnot. But, to answer your question -- inherently, there is no difference between a higher and a lower clocked version of the same chip. (Clarification: there are always new revisions and new layouts -- a Rev A chip *is* inherently different from a Rev B chip, but within a revision there is no difference between the higher and lower clocked versions.)
edit: also, manufacturers are often conservative in their binning, so there's usually some headroom for overclocking.