What's the need to shrink transistor size to such an extent?

paadness

Member
May 24, 2005
178
0
0
I've been wondering for a few days why it's not possible to make a CPU core much larger than the current ones.

I mean to say, what if we don't shrink transistors, but keep adding them at whatever size they are and make a larger core? Shrinking transistors is causing a lot of heat problems, and CPUs are more sensitive to voltages.

So why is it that the companies are packing more punch into the same area? Smaller size - that's rubbish to us end users.

 

MobiusPizza

Platinum Member
Apr 23, 2004
2,001
0
0
No. Shrinking the process can both reduce heat and give higher yields. Shrinking from 130nm to 90nm produces about 75% more dies because more cores can fit on a wafer.

A smaller process requires less voltage to run, hence the lower heat dissipation. However, leakage needs to be controlled, or the chip will have poor performance and may run just as hot as on the larger process.

Also, smaller transistors switch faster, hence the higher frequencies they can achieve. Just compare the overclockability of a 130nm processor with that of a 90nm one.

As far as I know, die size has a limit on how large it can be. Shrinking the process can pack more transistors in without going over that limit, hence higher performance. I don't know what the limit is or why there is one. Maybe a large die with many transistors would have low yield or low performance. Someone who's an expert on this can answer.
 

paadness

Member
May 24, 2005
178
0
0
It's true: less voltage, less heat. But it's not just about lower voltage; it's about higher transistor density per unit area.

75% more yield means a lot of savings; I don't, however, see any reduction in prices.

It's almost ridiculous when it comes to overclocking. Most motherboards allow you to take a 3.0 GHz P4 to 3.8 GHz on air cooling. With that in mind, we save a hell of a lot. So what's the point? I don't understand.

Who do Intel and AMD target with their $1000 CPUs when we can achieve similar performance with a $300 CPU?

Again, I'll stress my question about larger cores. Intel is now going to a 65nm process; soon they'll reach the wavelength of light, and that's the end for the single core.

The biggest advantage is obvious: compare the last 10 years - the same processor size, yet an exponential increase in performance.

Still, today, isn't it possible to make a huge CPU core, regardless of yield issues? Possible or not?
 

MobiusPizza

Platinum Member
Apr 23, 2004
2,001
0
0
Originally posted by: paadness
Still, today, isn't it possible to make a huge CPU core, regardless of yield issues? Possible or not?

I think it should be possible, but it would also mean an exponential increase in price. Suppose a wafer is used to make, say, only 10 CPUs of 32 cores each. They would be powerful, but they might need a new motherboard design and a new cooling design, and such a large processor would have a high rate of defects and failures. Even if 100% yield were achieved, a wafer yielding only 10 CPUs would make each one ultra expensive (10 times or more the price of a single-core CPU).

The technological advances I read about earlier, such as redundant electrical contacts, may increase yields, so we may see a solution.

However, as you can see, the huge dual-core CPUs now are priced at twice the single-core offering, because they occupy twice the area of the wafer. So a CPU that is 4 times as big, with quad cores, will be 4 times as expensive IF there are no advances in process, wafer size, or manufacturing techniques.
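A toy sketch of that cost logic in Python (all numbers here are made up, just to show how a roughly fixed wafer cost drives the per-die price):

# Toy cost model (all numbers hypothetical). A processed wafer costs
# roughly the same no matter what's printed on it, so the cost per good
# die rises quickly as dies get bigger and yields fall.
wafer_cost = 5000.0  # dollars per processed wafer (made up)
for dies_per_wafer, yield_fraction in ((100, 0.90), (10, 0.50)):
    good_dies = dies_per_wafer * yield_fraction
    print(f"{dies_per_wafer:3d} dies at {yield_fraction:.0%} yield "
          f"-> ${wafer_cost / good_dies:,.0f} per good die")
# 100 dies at 90% yield -> $56 per good die
#  10 dies at 50% yield -> $1,000 per good die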
 

MobiusPizza

Platinum Member
Apr 23, 2004
2,001
0
0
Originally posted by: paadness
It's true: less voltage, less heat. But it's not just about lower voltage; it's about higher transistor density per unit area.

Yeah, more transistors per unit area means higher performing processors.

Originally posted by: paadness
It's almost ridiculous when it comes to overclocking. Most motherboards allow you to take a 3.0 GHz P4 to 3.8 GHz on air cooling. With that in mind, we save a hell of a lot. So what's the point? I don't understand.

Well, I don't get your point. Good overclockability is good for BOTH the CPU manufacturers and us. They can raise the frequencies of their CPUs so as to sell them at higher price points and stay competitive with rival companies, and we get to overclock the remaining headroom. We both benefit from the shrink in process.


Originally posted by: paadness
Who do Intel and AMD target with their $1000 CPUs when we can achieve similar performance with a $300 CPU?

No. Dual cores have twice the performance potential. Seriously, in the future, when all applications take advantage of multiple processor cores, we will indeed see twice the performance.


 

bobsmith1492

Diamond Member
Feb 21, 2004
3,875
3
81
The main problem, though, is the time it takes for signals to propagate across the chip from stage to stage.
 

icarus4586

Senior member
Jun 10, 2004
219
0
0
Shrinking transistors is causing a lot of heat problems, and CPUs are more sensitive to voltages.
There are different reasons for microprocessors giving off heat. The main problem that is exacerbated by making transistors smaller is leakage current. I read that in today's 90nm CPUs, transistor leakage is responsible for something like 25% of the energy dissipation of a CPU, but in a 130nm CPU it's more like 10%, and back in the 180nm CPUs of a few years ago, transistor leakage wasn't a problem at all. There are other reasons, though, that CPUs give off heat.
So let's say you make a Pentium D on a 180nm process instead of a 90nm one. No more transistor leakage worries! But the thing's so damn huge, other reasons for energy dissipation (just the clock current running through the chip, transistors switching, etc) become enormous. Just a rough guess, but something like that would use a couple hundred watts.
Also, like bob said, signal delay would become a significant problem with a die this large; the CPU would not be able to reach 3 GHz-range clock speeds.
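To put rough numbers on the signal-delay point, here's a back-of-the-envelope sketch in Python. The 30mm die width and the c/2 effective signal speed are illustrative assumptions of mine (real RC-limited wires are far slower than c/2, which only makes the problem worse):

c = 3.0e8            # speed of light, m/s
clock_hz = 3.0e9     # 3 GHz target clock
die_width_m = 0.030  # a hypothetical 30 mm wide die
period_ps = 1.0 / clock_hz * 1e12
crossing_ps = die_width_m / (c / 2) * 1e12  # assume signals move at ~c/2
print(f"clock period: {period_ps:.0f} ps, die crossing: {crossing_ps:.0f} ps")
# clock period: 333 ps, die crossing: 200 ps -- one corner-to-corner
# trip eats most of a cycle even with this optimistic wire speed.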
 

Gibsons

Lifer
Aug 14, 2001
12,530
35
91
Let me see if I've learned anything from reading pm's (and others') posts.

You take a sheet of silicon, and it has 10 defects in random places on it. If you're making 50 CPUs on that sheet, 10 will have fatal flaws, so your yield is 40 CPUs. Shrink the process down so that you can get 100 on the sheet; you still lose 10, so your yield is 90 CPUs. More than twice as many as you were getting before. That leads to profit.
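A quick Python sketch of that arithmetic, using the made-up numbers from the post:

fatal_defects = 10  # random fatal defects per wafer (hypothetical)
for dies_per_wafer in (50, 100):
    good = dies_per_wafer - fatal_defects  # assume each defect kills one die
    print(f"{dies_per_wafer:3d} dies -> {good} good "
          f"({good / dies_per_wafer:.0%} yield)")
#  50 dies -> 40 good (80% yield)
# 100 dies -> 90 good (90% yield)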
 

stevty2889

Diamond Member
Dec 13, 2003
7,036
8
81
Originally posted by: Gibsons
Let me see if I've learned anything from reading pm's (and others') posts.

You take a sheet of silicon, and it has 10 defects in random places on it. If you're making 50 CPUs on that sheet, 10 will have fatal flaws, so your yield is 40 CPUs. Shrink the process down so that you can get 100 on the sheet; you still lose 10, so your yield is 90 CPUs. More than twice as many as you were getting before. That leads to profit.

But also, as your dies get smaller, smaller defects have a greater impact, so you might lose 15-20 instead of 10.
 

RaynorWolfcastle

Diamond Member
Feb 8, 2001
8,968
16
81
Defects and die area are two key factors in CPU manufacturability. Gibsons' statement is correct if you assume that getting the CPU onto the smaller process was just a "dumb shrink"; by that, I mean that the features are all shrunk down with no other changes made (it is in fact a misnomer, because I don't think it's even possible on modern processes to do such a thing anymore). By doing this, you effectively reduce the die area significantly and cut your costs very significantly. If we assume, as a very rough estimate, that features all shrink down proportionally to the wavelength, then going from 130nm to 90nm on a die that was originally 10mm by 10mm (100 mm^2), you would now have a CPU that's just about 48 mm^2, and you could fit twice as many CPUs in the same wafer area!
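Here's that shrink arithmetic as a Python sketch (assuming, as above, that all features scale with the process number - which real shrinks only approximate):

old_process_nm, new_process_nm = 130.0, 90.0
old_die_mm2 = 100.0  # the 10 mm x 10 mm die from the example
linear_scale = new_process_nm / old_process_nm
new_die_mm2 = old_die_mm2 * linear_scale ** 2  # area scales as the square
print(f"new die: {new_die_mm2:.1f} mm^2, "
      f"{old_die_mm2 / new_die_mm2:.1f}x as many dies per wafer")
# new die: 47.9 mm^2, 2.1x as many dies per wafer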

However, what usually happens is that the CPU manufacturer will choose to add features to the core when they perform the die shrink, while simultaneously being able to increase the operating frequency thanks to the smaller transistors. So if your CPU was 100 mm^2, ran at 2 GHz, and had a 512kB L2 cache, after the die shrink the manufacturer might be able to give you a 75 mm^2 die with a 1 MB L2 cache running at 2.3 GHz (alternately, the manufacturer could put two cores in the same area!). Thus you get better performance and the manufacturer cuts the production cost. This is essentially why moving to more advanced, smaller processes benefits everyone. In reality things are much more complex than that: R&D costs for a new process are becoming astronomical, yields will likely not be as good on a new process as on a mature one, etc.

The main downsides to the shrinks are twofold:
A) You increase power density
B) You increase leakage

In the past, neither of these was really significant: leakage might go from 1% of total power consumption to 2% - no big deal. Power density might increase a bit, so you'd just throw a slightly more efficient heatsink on the chip and everything would work smoothly. With modern processors, however, these are becoming significant issues, as heatsinks grow bigger, bulkier, and more sophisticated to deal with high power densities. The problem is exacerbated by the fact that modern processes are becoming so small that quantum effects become significant, and essentially insulators aren't really "good" insulators anymore. Also, when you toss hundreds of millions of transistors onto a chip, even a tiny amount of leakage from each transistor becomes a problem. To give you an idea: if you have 200 million transistors at 1.5V and each leaks 200 nA at all times, then you end up wasting 60W. This 60W does no useful work and must be dissipated by the heatsink! Moving forward, keeping this leakage in check through new and increasingly complex mechanisms (SOI, strained silicon, low-K dielectrics, etc.) is a major challenge that the industry faces.
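That 60W figure is straightforward arithmetic; a Python sketch with the numbers from the post:

transistor_count = 200e6        # 200 million transistors
supply_v = 1.5                  # volts
leak_per_transistor_a = 200e-9  # 200 nA leaked per transistor, always on
leak_power_w = transistor_count * supply_v * leak_per_transistor_a
print(f"static leakage power: {leak_power_w:.0f} W")  # 60 W of pure waste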

I'm sure pm and CTho could add to (and correct) anything I've said to give you a better idea of why else die shrinks are desirable since they actually work in this field :).
 

interchange

Diamond Member
Oct 10, 1999
8,023
2,875
136
You guys are also forgetting the massive problem of interconnect.

We are having a huge problem even clocking modern processors.

You shouldn't say the process is shrinking transistor size. The process is shrinking feature size, which in turn shrinks transistors. It used to be that transistor switching was the bottleneck in silicon: it didn't matter how far apart you put two transistors on the same die, because the interconnect latency was trivial compared to the switching time of the transistors. Since that is no longer the case, just adding bunches and bunches of logic without scaling down won't gain you much. Neither will adding massive caches, etc. This is why we have multiple levels of cache, and so on. You need to balance out latencies in order to make an effective processor.
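To make the interconnect point concrete, here's a sketch of the classic distributed-RC wire delay estimate in Python. The per-mm resistance and capacitance below are illustrative values I've picked, not figures from any real process; the point is that because both R and C grow linearly with wire length, the delay grows with its square:

r_per_mm = 100.0    # wire resistance, ohms per mm (illustrative)
c_per_mm = 0.2e-12  # wire capacitance, farads per mm (illustrative)
for length_mm in (1, 5, 10):
    r_total = r_per_mm * length_mm
    c_total = c_per_mm * length_mm
    delay_s = 0.38 * r_total * c_total  # distributed-RC delay estimate
    print(f"{length_mm:2d} mm wire -> {delay_s * 1e12:6.1f} ps")
# 1 mm -> 7.6 ps, 5 mm -> 190 ps, 10 mm -> 760 ps: 10x the length,
# 100x the delay, which is why long wires need repeaters.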

Unfortunately, this also introduces a lot of other problems: interfering EM fields generating crosstalk, hot spots, etc.

In short, processor design is no easy task :) :).
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
I mean to say, what if we don't shrink transistors, but keep adding them at whatever size they are and make a larger core? Shrinking transistors is causing a lot of heat problems, and CPUs are more sensitive to voltages.
The biggest problem with this idea is a term that I might be misspelling called the "reticle limit". It's essentially the limit of how big a chip you can make using the optical lensing system on a given stepper generation. On the current 90nm process technology, it's approximately 30mm x 30mm. I can usually find Google links to back up my assertions, but I'm not finding anything that mentions reticle size limits on Google... which leads me to wonder if I'm misspelling it (retical?).

In any case, whether Google is going to back me up on this or not, there is a limit to the size that you can make a chip before "edge effects" from the lens system cause errors. It has been gradually increasing from one process generation to the next.

That said, there are plenty of reasons why there aren't a lot of 30mmx30mm chips being made - in fact there are only a handful of chips being made at sizes even close to this limit. Those reasons have all been mentioned, but foremost among these is yield.

Who does Intel or AMD target with their $ 1000 CPU's even though we can achieve similar performance with $ 300 CPU.
Usually businesses. There are plenty of business uses where errors are absolutely intolerable (banking, trading systems, currency exchanges), where any downtime at all results in a substantial loss in revenue (think: eBay, Amazon, also Visa), and there are plenty of business applications that run over long periods of time, where small percentage increases in calculation speed result in significantly higher productivity (Pixar's render farms).

Your point is that overclocking achieves similar results to those of buying a more expensive CPU, but this is not actually true. Manufacturers test parts across a broad spectrum of temperature, voltage, and frequency to determine the operating points that they are labelled with. There is a myth, not infrequently mentioned on online forums, that all CPUs of a stepping/family are essentially the same and that manufacturers just stamp them with whatever speed they need at the time. Speaking as someone who spends a good deal of time looking at large statistical charts of CPU speeds, I can say that this is most definitely untrue. Given that there is a wealth of papers and books on the subject, I'm not going to go into why exactly this is - I can, however, cite some books that discuss the issues at a broad level should you want to look into this further.

Since there's a range of speeds in the parts coming out of the fab, there really is a physical reason why one part is stamped "3.8GHz" and another "3.0GHz". You may be able to achieve the frequency of a 3.8GHz part with a 3.0GHz part, but you will do this at some cost - either in operating margin, or in long-term reliability. It's your microprocessor; you can do what you want with it.
This leaves the question as to who is the master in terms of product yields?
I doubt that if we got the heads of the manufacturing departments from AMD, IBM, Intel, National Semiconductor, NEC, Samsung, Texas Instruments, TSMC, UMC, and any of the other large-scale manufacturers all in the same room and asked them this question, we could get any consensus at all. Every company tries to maximize its yield - it's one of the keys to reducing manufacturing costs (which directly affect profit and competitiveness) - and every company usually hides this data, since it can tell a competitor exactly what it costs to manufacture a part, and thus what profit margins a competitor has on a part, which gives a large strategic advantage in pricing.

Patrick Mahoney
Microprocessor Design Engineer
Intel Corp.
 

paadness

Member
May 24, 2005
178
0
0
Thank you, pm. I appreciate your words.

Lastly, if you are reading this thread: what is the maximum possible shrinkage before Intel or any other firm goes deep into the chemistry and runs up against individual atoms?
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
Originally posted by: paadness
Lastly, if you are reading this thread: what is the maximum possible shrinkage before Intel or any other firm goes deep into the chemistry and runs up against individual atoms?
Throughout the last 30 years a lot of people have tried to predict the end of CMOS scaling, only to watch their predicted "limit" get passed by the industry. Rather than throw my guess out, I'll just reply, "I don't know".

I think there is a fairly clear road to approximately 10nm - particularly factoring in shifts to other materials for the gate contact, the gate insulator, and the channel, as well as allowing for alternative FET topologies away from the self-aligned FET. Beyond 10nm, the scaling road becomes very difficult to predict. The shift to 10nm technology, if things occur as predicted, should happen sometime around 2014.
 

OrganizedChaos

Diamond Member
Apr 21, 2002
4,524
0
0
I'm not one of the smart people, but I think it's because the speed of light isn't fast enough. If the chip is too large, it takes too long for a signal to get from A to B, and it gets interrupted by the next clock cycle.

//99 percent chance that's wrong
 

Eskimo

Member
Jun 18, 2000
134
0
0
Good post as always, pm. I can see I'm not needed in this thread ;).

Originally posted by: pm

The biggest problem with this idea is a term that I might be misspelling called the "reticle limit". It's essentially the limit of how big a chip you can make using the optical lensing system on a given stepper generation. On the current 90nm process technology, it's approximately 30mm x 30mm. I can usually find Google links to back up my assertions, but I'm not finding anything that mentions reticle size limits on Google... which leads me to wonder if I'm misspelling it (retical?).

You are spelling it right. As someone who used to work for a mask shop, I can confirm your limit, and it's not entirely due to the process technology but rather to the reticle creation technology. Current high-end stepper/scanner lithography systems are all 4X reduction systems. This means that the features printed on the reticle are drawn at 4 times the size of the intended feature. The standard for reticle substrates is a 6" square quartz plate; 6" is ~150mm. At 4x reduction, that gives you about a 30mm field size at the wafer level: 4 x 30mm is 120mm of your total plate, and as you get toward the edge of the reticle, just like at the edge of a wafer, you suffer from non-uniformity issues. Unfortunately for reticles, rather than just having dies that won't yield at the edge of the wafer, the edge of your reticle is the edge of a die, and if that side of the die won't print with the same properties as the other side or the center, you have some major device issues when you go to use it.
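Eskimo's field-size arithmetic, as a small Python sketch (the 120mm usable-plate figure and 4x reduction are from the post above):

plate_mm = 6 * 25.4  # a 6 inch quartz plate is ~152 mm across
usable_mm = 120.0    # usable plate width after edge margins (from post)
reduction = 4        # 4x reduction stepper/scanner optics
field_mm = usable_mm / reduction
print(f"plate: {plate_mm:.0f} mm, wafer-level field: ~{field_mm:.0f} mm")
# plate: 152 mm, wafer-level field: ~30 mm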


Those reasons have all been mentioned, but foremost among these is yield.

Agreed; even the simplest models have yield decreasing exponentially as area increases.
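One of those simple models is the Poisson yield model, Y = exp(-D*A), where D is the defect density and A is the die area. A Python sketch with an illustrative (made-up) defect density:

import math

defect_density = 0.5  # defects per cm^2 (illustrative)
for die_area_cm2 in (0.5, 1.0, 2.0, 4.0):
    y = math.exp(-defect_density * die_area_cm2)
    print(f"{die_area_cm2:.1f} cm^2 die -> {y:.0%} yield")
# 0.5 -> 78%, 1.0 -> 61%, 2.0 -> 37%, 4.0 -> 14%: doubling the die
# area squares the (fractional) yield.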

I doubt that if we got the heads of the manufacturing departments from AMD, IBM, Intel, National Semiconductor, NEC, Samsung, Texas Instruments, TSMC, UMC, and any of the other large-scale manufacturers all in the same room and asked them this question, we could get any consensus at all. Every company tries to maximize its yield - it's one of the keys to reducing manufacturing costs (which directly affect profit and competitiveness) - and every company usually hides this data, since it can tell a competitor exactly what it costs to manufacture a part, and thus what profit margins a competitor has on a part, which gives a large strategic advantage in pricing.

Not to mention there is a wide diversity in the products produced by each of those companies. Design can play a major role in functional yield, and even in susceptibility to random defectivity. Probably the best comparison can be made between DRAM manufacturers, since our parts are much more similar to each other than logic designs are. But then again, every company is using a proprietary process to make those parts, and just because we can get good yields on a particular process does not mean that Micron would be able to without a lot of our engineering knowledge, and vice versa.

The simple answer to the original post is one word: economics. It's money that drives every company to do what they do.

In my business, manufacturing a commodity part that sells for 1% or less of what the parts pm makes sell for, it's all about volume and cost. The only way we can make a profit is to push as much silicon out as quickly and cheaply as possible while maintaining the quality required by our customers. The secret weapon of DRAM lies in our redundancy: we don't have to make perfect parts to make a perfectly functioning device.

In pm's case, economics dictates that they must continue to push the speed/feature envelope in order to deliver a product that will drive sales and maintain or take away market share. They have a relatively fixed volume they need to satisfy and are more concerned about yielding enough high-speed, fully functional parts. They have little room for error and must operate under extremely tight and advanced process controls.

All of us profit from being able to stuff more chips onto the surface of the wafer by shrinking their size. Processing a wafer is a relatively fixed cost; the more dies you can produce on that wafer, the more profit you are going to realize. Again, this is more of an issue for us making $4 chips than for Intel and AMD making $400 chips, but don't think for one minute their design teams weren't given some sort of guidelines on floorplan limitations by the bean counters.

 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
There is a myth, not infrequently mentioned on online forums, that all CPUs of a stepping/family are essentially the same and that manufacturers just stamp them with whatever speed they need at the time. Speaking as someone who spends a good deal of time looking at large statistical charts of CPU speeds, I can say that this is most definitely untrue.

Good to hear that from what I'd consider an "authoritative" source ;). That should go in the AnandTech FAQs.