Why not make CPU die sizes bigger?

OneOfTheseDays

Diamond Member
Jan 15, 2000
7,052
0
0
Ok I understand the need for transistors to get smaller and smaller so that we can pack more and more into a chip, but why don't we just make larger CPU die's so that we can put more transistors on it? I mean processor's are already getting really damn small, but would it matter if we made the die's a lot bigger so that we could fit more transistors? Are there certain technological problems in doing so?
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
You want smaller structures because it lowers manufacturing cost, and because larger structures don't allow high operating frequencies. Also, leakage problems aside, smaller structures run on less power.

Besides, plurals don't have apostrophes. Thank you.
 

OneOfTheseDays

Diamond Member
Jan 15, 2000
7,052
0
0
haha, thank you grammar police. yea i figured there would be many many good reasons why we keep going smaller and smaller with each generation of CPUs.
 

Bona Fide

Banned
Jun 21, 2005
1,901
0
0
Plus, to my knowledge, a larger die means more heat output and power draw. That's why Intel is moving to 65nm and then 45nm chips. AMD's new fab should be working on the same.
 

icarus4586

Senior member
Jun 10, 2004
219
0
0
Lots of reasons, most have already been posted.
As clockspeed increases, the time that it takes for an electrical signal to get from one part of the die to another actually becomes perceptible. This limits potential clockspeed.
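That travel-time limit is easy to put rough numbers on. A back-of-the-envelope sketch (not from the thread; the assumed on-chip propagation speed of half the speed of light is a common rough figure, and real RC-limited wires are slower still):

```python
# How far can a signal travel in one clock cycle?
C = 3.0e8             # speed of light in vacuum, m/s
WIRE_SPEED = 0.5 * C  # assumed effective on-chip propagation speed

def reach_per_cycle_mm(clock_hz):
    """Distance (mm) a signal can cover in one clock period."""
    period_s = 1.0 / clock_hz
    return WIRE_SPEED * period_s * 1000.0

for ghz in (1, 2, 3):
    print(f"{ghz} GHz: {reach_per_cycle_mm(ghz * 1e9):.0f} mm per cycle")
```

At 3 GHz the reach is only a few centimeters, and that's before subtracting the time the logic itself needs, so cross-die signals on a big chip start costing extra cycles.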
AMD's Newcastle core (just an example) has a die size of 144mm^2. If you could spread all those transistors out without changing feature size, clock speed, etc., keeping the same power draw, the core would be easier to cool, since it would have a larger surface area for heat transfer. However, if you took that die and added a bunch of good stuff (like an onboard ISA controller and maybe an EDO DRAM controller, sweet stuff like that... jk), it would increase the heat output, power requirements, etc., and it would get harder to cool.
That's one of the reasons Prescotts run hotter than Northwoods. Even neglecting the fact that they use more power, a decrease in die size at the same power consumption makes the chip harder to cool: less surface area and whatnot.
And probably the biggest thing... The smaller the die, the more can be fit on the original wafer, and more money is made.
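The dies-per-wafer point is easy to put numbers on. This sketch uses a standard textbook approximation (not from the thread); the wafer diameter and die areas are example figures:

```python
import math

# Rough dies-per-wafer estimate (common approximation; ignores scribe
# lines, edge exclusion details, and reticle constraints).
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    return int(math.pi * wafer_diameter_mm**2 / (4 * die_area_mm2)
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# 300 mm wafer: a 144 mm^2 die (Newcastle-sized) vs. one twice as large.
small = dies_per_wafer(300, 144)
large = dies_per_wafer(300, 288)
print(small, large)  # the bigger die gives far fewer candidates per wafer
```

Doubling the die area more than halves the candidate dies per wafer, because the edge of the round wafer wastes proportionally more area on big rectangles.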
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Originally posted by: Cattlegod
speed of light is finite.
low yield the larger you go on die size.

The problems in the silicon lattice are not as apparent at larger feature sizes. However, you wouldn't want your cell phone to be twice its current size (or maybe more). Smaller size helps in more ways than one.
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Also, at 1 gigahertz, the wavelength of a signal is just about one foot. What does that mean? You could have a standing wave (at 1 GHz) on a one-foot wire that has maximum voltage at both ends but zero voltage in the middle. You can't run at high frequencies if your wires are too long.
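A quick sanity check of that wavelength figure (free-space numbers; signals in on-chip wires travel slower, which makes the effect worse, not better):

```python
# Wavelength = speed of light / frequency.
C = 3.0e8  # speed of light in vacuum, m/s

def wavelength_m(freq_hz):
    return C / freq_hz

print(wavelength_m(1e9))  # ~0.3 m at 1 GHz, i.e. roughly one foot
print(wavelength_m(3e9))  # ~0.1 m at 3 GHz
```

Once a wire is a noticeable fraction of a wavelength long, it stops behaving like a single node and has to be treated as a transmission line.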
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
There was a really good discussion in this thread:

http://forums.anandtech.com/messageview.aspx?catid=50&threadid=1613541

Quoting from my answer in that thread:
I mean to say, what if we don't shrink transistors and keep adding them at whatever their size is and make a larger core? Shrinking transistors is causing a lot of heat problems and CPUs are more sensitive to voltages.

The biggest problem with this idea is a term that I might be misspelling called the "reticle limit". It's essentially the limit of how big a chip you can make using the optical lensing system on a given stepper generation. On the current 90nm process technology, it's approximately 30mm x 30mm. I can usually find Google links to back up my assertions, but I'm not finding anything that mentions reticle size limits on Google... which leads me to wonder if I'm misspelling it (retical?).

In any case, whether Google is going to back me up on this or not, there is a limit to the size that you can make a chip before "edge effects" from the lens system cause errors. It has been gradually increasing from one process generation to the next.

That said, there are plenty of reasons why there aren't a lot of 30mmx30mm chips being made - in fact there are only a handful of chips being made at sizes even close to this limit. Those reasons have all been mentioned, but foremost among these is yield.

and then later on in that thread, Eskimo added in reply to the previous paragraphs:

You are spelling it right. As someone who used to work for a mask shop, I can confirm your limits, and it's not entirely due to the process technology but rather the reticle creation technology. Current high-end stepper/scanner lithography systems are all 4X reduction systems. This means that the features printed on the reticle are drawn at four times the size of the intended feature. The standard for reticle substrates is a 6" square quartz plate; 6" is ~150mm. At 4x, that gives you about a 30mm field size at the wafer level (4 x 30mm uses 120mm of your total plate), and as you get to the edge of the reticle, just like the edge of a wafer, you suffer from non-uniformity issues. Unfortunately for reticles, rather than having dies that won't yield at the edge of the wafer, the edge of your reticle is the edge of a die, and if that side of the die won't print with the same properties as the other side or the center, you have some major device issues when you go to use it.
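Eskimo's arithmetic above reduces to a few lines (round numbers assumed, as in the post):

```python
# Reticle-limit sketch: a 6-inch quartz plate imaged through a 4x
# reduction stepper. The usable pattern width after edge exclusion
# (120 mm) is the round figure from the post, not a measured spec.
PLATE_MM = 6 * 25.4   # 6-inch plate is about 152 mm on a side
USABLE_MM = 120       # usable pattern width after edge exclusion (assumed)
REDUCTION = 4         # features on the reticle are drawn 4x final size

field_at_wafer_mm = USABLE_MM / REDUCTION
print(field_at_wafer_mm)  # ~30 mm maximum die dimension at the wafer
```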



Patrick Mahoney
Senior Design Engineer
Enterprise Processor Division
Intel Corp.
Fort Collins, CO

 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
I just wanted to add that using 1x magnification reticles, the die that could be created for a given processor would be four times as big (but the number of transistors would be just the same). However, I don't think anybody would build a bigger processor with exactly the same features (especially considering that a smaller process allows for a higher operating frequency).
 

OneOfTheseDays

Diamond Member
Jan 15, 2000
7,052
0
0
well sooner or later we aren't going to be able to shrink transistors any further. what are we going to do then?
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Originally posted by: Sudheer Anne
well sooner or later we aren't going to be able to shrink transistors any further. what are we going to do then?

If that day comes, then the question will be, what do you do with 10 billion transistors anyways? If you want to increase die size, what would you do better if you had 50 billion transistors?
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
well sooner or later we aren't going to be able to shrink transistors any further. what are we going to do then?
One idea: stack the transistors vertically. One more layer of FETs doubles your transistor count, and then two more layers does it again. If you add a layer of wiring in between the two stacks you could put nFETs on the bottom and pFETs on the top and dramatically cut the die size by eliminating well spacing. Cooling then becomes a problem, but since we are talking hypothetically, you could micromachine channels in and then liquid cool them... IBM has done some work with this already.

Another idea is to paradigm shift to something like carbon nanotube devices or spin-based transistors. Who knows, maybe you can make a storage element out of quarks.

Or, we find out that we have reached the limit, and then instead we do the "Right Hand Turn" that Intel took with multi-cores. And, this is pushing the imagination a bit, imagine a world where there are so many computers around that you don't need one single big computer chip any more... just something that can take a task, break it up, and send it off to all of the other computers nearby wirelessly. That would let you carry around something small and yet have a bunch of computers scattered all over the house (or the world) do the computing for you. So computers stop getting smaller... but you can still carry around small devices because they communicate with so many others nearby.
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Thus enabling you to carry around something small, and yet have a bunch of computers that are scattered all over the house (or the world) do the computing for you
Some kind of wireless remote control (or mobile video terminal) for a bigger computer would be extraordinary... You would have lots of computing power at home, and you would access it via something maybe as small as a stack of 100 A4 papers (or Legal or whatever). Voice recognition, handwriting recognition, output for a bigger display, input for keyboard and mouse and so on.
I would like one myself
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: pm
Another idea is to paradigm shift to something like carbon nanotube devices or spin-based transistors. Who knows, maybe you can make a storage element out of quarks.
Take a look at this paper... in about 10 years we may start to run into limits inherent to the switching model used - any sort of binary switching device, whether it's a regular silicon FET or something using carbon nanotubes, will have this problem.
 

OneOfTheseDays

Diamond Member
Jan 15, 2000
7,052
0
0
i suppose when we reach that point we will have to ask ourselves if there is anything out there that we need that requires that kind of power.
 

xxXXDeathXXxx

Junior Member
Jun 26, 2005
10
0
0
Random error frequency due to reactions from nucleon collisions with the circuit itself goes up with increasing cross sectional area. Dealing with that problem will cause costs to go up with circuit complexity.

PS I thank the circuit characteristics at 60K feet thread for this even being on my mind as I answer now. Being an ultra lowlander I had not really even considered it much until today.

Edit: should have qualified - high energy nucleons....
 

kennychuck

Junior Member
Aug 25, 2005
5
0
0
Originally posted by: pm
Another idea is to paradigm shift to something like carbon nanotube devices or spin-based transistors. Who knows, maybe you can make a storage element out of quarks.

The University of Florida is doing research into so-called "spin-tronics" currently which would use the spin of an electron to store information. They're studying zinc-oxides in particular. If it works it has a lot of promise.
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
Originally posted by: Fox5
Itanium has an incredibly large die size.
Yes, this is the microprocessor that I work on. The die is approximately 28mm x 22 mm. Two dual-threaded cores with 24MB of cache - all on one die.
 

Future Shock

Senior member
Aug 28, 2005
968
0
0
One of the reasons that large dies are bad is simple economics - you lose a greater area of the original silicon wafer each time you get a defective chip. Since defects are a fact of life in semiconductor production, the amount of wafer you waste can be a factor in your cost of production. How large a factor, I can't say, but I don't think that highly pure wafers are cheap...
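That wasted-wafer point can be illustrated with the textbook Poisson yield model; the defect density used here is an assumed example figure, not any fab's real number:

```python
import math

# Poisson defect-yield model: Y = exp(-D * A), where D is defect
# density (defects/cm^2) and A is die area (cm^2). A textbook
# approximation for illustration only.
def yield_fraction(defect_density_per_cm2, die_area_mm2):
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Doubling die area at an assumed D = 0.5 defects/cm^2:
print(yield_fraction(0.5, 150))  # about 0.47 of dies are good
print(yield_fraction(0.5, 300))  # about 0.22 -- worse than half
```

Yield falls off exponentially with area, so a die twice as big loses more than twice as many chips to defects, on top of fitting fewer candidates on the wafer in the first place.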

Future Shock
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: pm
Originally posted by: Fox5
Itanium has an incredibly large die size.
Yes, this is the microprocessor that I work on. The die is approximately 28mm x 22 mm. Two dual-threaded cores with 24MB of cache - all on one die.

Yeah that thing is a monster in terms of die size(guessing it'll be in terms of performance as well for that matter ;)).
Damn die is bigger than many packages.

I'm guessing Montecito is about as big as you guys are willing to go, no?
Of course I guess the cache makes up some 75-80% of the size right? Makes it more financially sound.
 

pX

Golden Member
Feb 3, 2000
1,895
0
71
I always thought it came down to yield. The smaller the chips are made, the higher the yield per wafer will be...