What I mean to say is: what if we don't shrink transistors, but keep adding them at whatever size they are and make a larger core? Shrinking transistors is causing a lot of heat problems, and CPUs are more sensitive to voltage.
The biggest problem with this idea is a term I might be misspelling, called the "reticle limit". It's essentially the limit on how big a chip you can make using the optical lens system of a given stepper generation. On the current 90nm process technology, it's approximately 30mm x 30mm. I can usually find Google links to back up my assertions, but I'm not finding anything that mentions reticle size limits on Google... which leads me to wonder if I'm misspelling it (retical?).
In any case, whether Google backs me up on this or not, there is a limit to how big you can make a chip before "edge effects" from the lens system cause errors. That limit has been gradually increasing from one process generation to the next.
That said, there are plenty of reasons why there aren't a lot of 30mm x 30mm chips being made - in fact, only a handful of chips are made at sizes even close to this limit. Those reasons have all been mentioned already, but foremost among them is yield.
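To put rough numbers on the yield problem: a standard first-order way to estimate die yield is the Poisson defect model, Y = exp(-A * D), where A is die area and D is defect density. Here's a minimal sketch in Python; the defect density is an invented illustrative figure, not real fab data.

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """First-order Poisson yield model: Y = exp(-A * D)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

D = 0.5  # defects per cm^2 -- an illustrative number, not real fab data

for side_mm in (10, 20, 30):
    area_cm2 = (side_mm / 10) ** 2
    y = poisson_yield(area_cm2, D)
    print(f"{side_mm}mm x {side_mm}mm die: area {area_cm2:.0f} cm^2, yield {y:.1%}")
```

Under this toy model, a modest 10mm x 10mm die yields over 60%, while a die near the reticle limit yields around 1% - which is why so few designs go anywhere near that limit.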
Who do Intel and AMD target with their $1000 CPUs, when we can achieve similar performance with a $300 CPU?
Usually businesses. There are plenty of business uses where errors are absolutely intolerable (banking, trading systems, currency exchanges), plenty where any downtime at all results in a substantial loss of revenue (think: eBay, Amazon, Visa), and plenty where applications run over long periods of time and small percentage gains in compute speed translate into significantly higher productivity (Pixar's render farms).
Your point is that overclocking achieves results similar to buying a more expensive CPU, but this is not actually true. Manufacturers test parts across a broad spectrum of temperature, voltage and frequency to determine the operating points they are labelled with. There's a myth, frequently repeated on online forums, that all CPUs of a given stepping/family are essentially the same and that manufacturers just stamp them with whatever speed they need at the time. Speaking as someone who spends a good deal of time looking at large statistical charts of CPU speeds, I can say that this is most definitely untrue. Given that there is a wealth of papers and books on the subject, I'm not going to go into exactly why this is - I can, however, cite some books that discuss the issues at a broad level should you want to look into this further.
Since there's a range of speeds among the parts coming out of the fab, there really is a physical reason why one part is stamped "3.8GHz" and another "3.0GHz". You may be able to reach the frequency of a 3.8GHz part with a 3.0GHz part, but you will do so at some cost - either in operating margin or in long-term reliability. It's your microprocessor; you can do what you want with it.
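To illustrate the binning idea (this is a toy sketch with invented distribution numbers, not Intel's actual test flow): parts come out of the fab with a spread of maximum stable frequencies, and each is stamped with the highest grade it clears with some guardband to spare.

```python
import random
from typing import Optional

# All numbers below are invented for illustration -- not real fab statistics.
MEAN_FMAX_GHZ = 3.6   # mean maximum stable frequency across tested parts
SIGMA_GHZ = 0.35      # part-to-part spread from process variation
GUARDBAND_GHZ = 0.2   # margin between tested Fmax and the stamped rating
SPEED_GRADES = [3.8, 3.4, 3.0]  # available ratings, highest first

def bin_part(fmax_ghz: float) -> Optional[float]:
    """Stamp a part with the highest grade it clears with full guardband."""
    for grade in SPEED_GRADES:
        if fmax_ghz - GUARDBAND_GHZ >= grade:
            return grade
    return None  # fails even the lowest bin

random.seed(0)
counts: dict = {}
for _ in range(10_000):
    grade = bin_part(random.gauss(MEAN_FMAX_GHZ, SIGMA_GHZ))
    counts[grade] = counts.get(grade, 0) + 1

for grade in SPEED_GRADES + [None]:
    label = f"{grade}GHz bin" if grade else "reject"
    print(f"{label}: {counts.get(grade, 0)} of 10000 parts")
```

Overclocking a 3.0GHz part to 3.8GHz amounts to spending that guardband yourself, which is exactly the operating margin and long-term reliability cost described above.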
This leaves the question: who is the master in terms of product yields?
I doubt that if we got the heads of manufacturing from AMD, IBM, Intel, National Semiconductor, NEC, Samsung, Texas Instruments, TSMC, UMC and any of the other large-scale manufacturers into the same room and asked them this question, you could get any consensus at all. Every company tries to maximize its yield - it's one of the keys to reducing manufacturing costs, which directly affect profit and competitiveness - and every company usually hides this data, since it can tell a competitor exactly what it costs to manufacture a part, and thus what profit margin a competitor has on that part, which is a large strategic advantage in pricing.
Patrick Mahoney
Microprocessor Design Engineer
Intel Corp.