Microprocessors - how is it possible to design & build them

Status
Not open for further replies.

sofl

Junior Member
Aug 6, 2012
1
0
0
I'm seeking to gain more insight into microprocessors & am interested to hear how it's possible to design & manufacture them - given the following factors:

  1. According to http://en.wikipedia.org/wiki/Transistor_count, current transistor counts have reached several billion - 4, 5, 6, 7, even 8 billion transistors. That would put the number of connections in the tens of billions.
  2. I can't see how human designers could achieve this - purely from a time & organisational perspective. Just counting to a billion would take 31 years, 251 days, 7 hours, 46 minutes, and 39 seconds (http://answers.yahoo.com/question/index?qid=20080128050552AAFUL18), never mind all the design work that needs to take place around each one of those 8 billion counts.
  3. Then there's the actual manufacturing. Most videos show some kind of physical abrasive process being used at various stages during manufacture to grind down the top surface. I find this very difficult to believe - it seems a very crude process given the scale of the chip components involved.
  4. I can't follow how it would be possible to reliably create a processor with so many billions of components - 100% functional.
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
[image: ancient-aliens.jpg]
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Basically, if it were not for the computers we have now, we couldn't build the chips for the next generation of computers. For example, when I program an FPGA, the chip has millions of transistors and gates that I couldn't re-create if my life depended on it, but the software I use handles all of that for me; all I have to do is lay out the functions I need.
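
To make that concrete, here's a toy Python sketch (nothing to do with real FPGA tooling; the function and names are made up): the designer writes down what the logic should do, and the "tool" mechanically derives the low-level detail - in this case a truth table, which is roughly what an FPGA lookup table (LUT) ends up storing.

```python
# Toy illustration (not real FPGA tooling): the designer writes the *function*,
# and the "tool" mechanically derives the gate-level detail. Here the detail is
# a truth table, which is essentially what an FPGA lookup table (LUT) stores.

from itertools import product

def majority(a, b, c):
    # The designer only says what the logic should do.
    return (a and b) or (a and c) or (b and c)

def synthesize_lut(func, n_inputs):
    # The "tool" enumerates every input combination and records the output,
    # standing in for the millions of low-level decisions real software makes.
    return {bits: func(*bits) for bits in product([False, True], repeat=n_inputs)}

lut = synthesize_lut(majority, 3)
for bits, out in lut.items():
    print(bits, "->", out)
```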


Making the chips themselves is done with lithography and layers. Print the layer's pattern on a clear sheet (the mask), expose it onto the wafer, etch away what wasn't exposed, and then do the next layer. Eventually those layers form circuits and pathways.
http://en.wikipedia.org/wiki/Semiconductor_device_fabrication


Chips are rarely produced 100% perfect. What manufacturers do is design in redundant areas, so that if something doesn't work 100% in final production there are options to correct it by burning a fuse or altering software. Even with how careful they are, chips always have issues; I still get data sheets with corrections for problems found after the chip has been on the market for 5 years.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
I can't see how human designers could achieve this - purely from a time & organisational perspective. Just counting to a billion would take 31 years, 251 days, 7 hours, 46 minutes, and 39 seconds (http://answers.yahoo.com/question/index?qid=20080128050552AAFUL18), never mind all the design work that needs to take place around each one of those 8 billion counts.

It's all about obtaining efficiency. The first stab at it is to narrow the problem down to unique designs. The chips with the highest transistor counts probably have the same design stamped out numerous times (so you cut the work down by 4x or so for quad-core processors). And then INSIDE each core there's some use of repeated designs (cache). So once you narrow it down to how many unique designs you have, you STILL have a large pile of work ahead of you, but at least you've reduced it quite a bit.
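
A rough back-of-the-envelope sketch of that reduction, with completely made-up numbers:

```python
# Back-of-the-envelope sketch with made-up numbers: how much of a multi-billion
# transistor chip is actually *unique* design work once replication is counted.

cores            = 4            # identical cores, designed once
sram_cells       = 400_000_000  # cache bit cells per chip...
sram_cell_xtors  = 6            # ...each a copy of one 6-transistor cell design
core_logic_xtors = 150_000_000  # tool/hand-designed logic per core (assumed)

total  = cores * core_logic_xtors + sram_cells * sram_cell_xtors
unique = core_logic_xtors + sram_cell_xtors   # designed once, stamped many times

print(f"transistors on the die   : {total:,}")
print(f"transistors designed once: {unique:,}")
print(f"replication factor       : {total / unique:,.0f}x")
```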

So you've got to start gaining efficiency inside each design. Automated synthesis flows generate the logic for you based on RTL and do all the dirty work. That's a HUUUGE boost in productivity, since you can start designing and placing hundreds of thousands of transistors at a time (with opportunities to iterate!). I know Intel is a little different since we do have critical datapaths designed by hand, but even for those designs you can be more efficient about it. When you need to latch 64 bits and do the same logic to all the bits, we don't draw it 64 times; we draw one bit and copy it across.

The next step... is just sheer manpower (or more innovation), but at least it gives you an idea of how to get all the work done. The transistor count is a little scary when you first look at it, but if you look at an SRAM array, there's an incredible number of transistors in there... and those require the least amount of layout work, since the blocks are highly instantiated.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
In terms of transistor count, it's worth mentioning that no one designs at the transistor level, and if they do, it's to build another part that gets combined in some way. For example, in order to make something that can add two 32-bit numbers together, we first create a unit that knows how to add two 1-bit numbers, and then combine 32 of them to get the result we want. Then we put 3 of those down to make our integer units (well, we also need multiplication, division and a host of other capabilities to complete an integer unit, but the principle is sound), and that gets made part of an overall core that then gets copied 4 times. It's surprisingly easy to get to millions/billions of transistors when all you are doing is copying and pasting the same basic pieces all over the place on a CPU.
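
A minimal Python sketch of exactly that idea - design a 1-bit full adder once, then chain 32 copies into a ripple-carry adder (a simplification; real CPUs use faster carry schemes, but the composition principle is the same):

```python
# Minimal sketch: design a 1-bit full adder once,
# then chain 32 copies into a ripple-carry adder for 32-bit numbers.

def full_adder(a, b, carry_in):
    # The only piece anyone actually "designs": adds two bits plus a carry.
    sum_bit   = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add32(x, y):
    # 32 identical copies of the same unit, wired carry-to-carry.
    result, carry = 0, 0
    for i in range(32):
        a, b = (x >> i) & 1, (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result & 0xFFFFFFFF   # wrap around like 32-bit hardware would

print(add32(123_456_789, 987_654_321))   # 1111111110
print(add32(0xFFFFFFFF, 1))              # 0 (overflow wraps)
```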

You could argue that CPUs really only spend 1/64th of the budget on genuinely different logic, because the other 63/64 is a copy and paste of the same components to make everything work on 64-bit numbers instead of 1-bit numbers. These days a large part of the CPU is cache, and once you have a scheme for storing one bit, storing millions of them is copy and paste as well (OK, it's a bit more work than that, but not much).

We just don't design these things from transistors; we never did. We build computers out of ever more complicated units which we snap together based on their interface requirements, and then let a computer work out how to actually lay the darn thing out in a reasonable way.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,672
2,546
136
I can't see how human designers could achieve this - purely from a time & organisational perspective.

No human could design a modern CPU transistor by transistor. However, that's not how it's done. You can make heavy use of abstraction, and separate the design into independent parts with clean interfaces.

For example, say you are designing a CPU that needs a cache subsystem. You design the inputs and outputs, define where on the chip they will be, and what the protocol for using the cache is -- down to "these wires are the address wires, and these are the data wires. When I pull the read control line up, you look up the address on the address lines, and if it's found, return the contents on the data lines." Then the cache team can do their work independently, without having to bother the rest of the CPU team again. And the first thing the cache team would do is design an interface of their own and split into a tag array team and a data array team.
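
As a hedged illustration (the class names, sizes and signal names here are invented, not anyone's real design), the agreed-upon contract might look something like this in Python - the rest of the CPU only ever sees read(), never the internals:

```python
# Hedged sketch of an agreed-upon interface (names/sizes made up): the rest of
# the CPU sees only the contract; how the cache team implements it internally
# (tag arrays, data arrays, banks) is entirely their problem.

class CacheInterface:
    def read(self, address):
        """Drive the address lines, pulse the read line; return (hit, data)."""
        raise NotImplementedError

class ToyDirectMappedCache(CacheInterface):
    # One possible internal design; it could change completely without the
    # rest of the CPU noticing, as long as read() keeps honouring the contract.
    def __init__(self, lines=16):
        self.lines = lines
        self.tags = [None] * lines
        self.data = [0] * lines

    def fill(self, address, value):
        index = address % self.lines
        self.tags[index] = address // self.lines
        self.data[index] = value

    def read(self, address):
        index = address % self.lines
        hit = self.tags[index] == address // self.lines
        return hit, self.data[index] if hit else 0

cache = ToyDirectMappedCache()
cache.fill(0x40, 0xDEADBEEF)
print(cache.read(0x40))   # (True, 3735928559) - a hit
print(cache.read(0x44))   # (False, 0) - a miss
```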

Division of labour is fundamentally how everything that's too hard for individuals is made, whether we are talking about CPUs, jet planes, or pencils. Matt Ridley has an awesome TED talk on the subject.

Then, when you can split no more, you still don't need to count every transistor remaining in your budget. If you are in the data array team, you design a single bank, and copy paste that into the 32 banks on the chip. Inside the bank, you design a single cache row, and copy paste that into the 1024 rows in there, etc etc.
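
A quick bit of arithmetic (invented numbers) shows how far that replication goes:

```python
# Rough illustration (numbers invented): designing one cell, one row and one
# bank by hand still yields millions of cells once everything is replicated.

bits_per_row  = 512
rows_per_bank = 1024
banks         = 32

cells = bits_per_row * rows_per_bank * banks
print(f"{cells:,} cells from roughly 3 hand-designed pieces")  # 16,777,216 cells
```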

Then there's the actual manufacturing. Most videos show some kind of physical abrasive process being used at various stages during manufacture to grind down the top surface. I find this very difficult to believe - it seems a very crude process given the scale of the chip components involved.

Today, we have the ability to grind with the precision of individual crystal planes, or to the accuracy of the width of one atom. How did we get that far? Start with grinding down to roughly visual accuracy, add strong economic incentives for more accuracy, and have a lot of really smart people working on it for a century, always going after the thing that's currently stopping you from grinding just a little bit finer. That's how technological progress works.

As for the chip manufacture itself, we basically have only 5 tools:

1. Uniformly grind down to a plane.
2. Deposit a uniform layer of something on the surface.
3. Use a chemical to remove some specific coating from the surface.
4. Use a visual mask to do lithography -- basically, shine a UV light on the die, and put a mask between the light and the die so that you can have structures that are not exposed. With fancy optics, you can make the mask a *lot* bigger than the die, which helps a lot in making the damn things.
5. Bombard the die with ions, such as for doping the silicon.

With these, it's possible to build almost anything, given enough steps. Let's say you want to build a layer of wires, with the surface of the die presently mostly coated with the interconnect dielectric, with the tops of the vias that lead to the metal layer below sticking out.

You start by coating the whole chip uniformly with photoresist. Then, use a lithography mask to expose the parts where the wires are going to be to UV. Then, use a chemical that attacks the parts of the photoresist that have been exposed to UV (but doesn't attack the parts that haven't), opening up the "channels" where the wires will go. Then, uniformly coat the whole chip with copper. Plane the chip down to slightly below the top level of the photoresist. Use another chemical to remove the rest of the photoresist. Coat the entire chip with dielectric. Plane down so that the copper is visible. You get how this works?
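
Here's a heavily simplified toy simulation of that exact sequence in Python - the die cross-section is just a row of material stacks, and the only operations are the uniform and masked steps from the list above (the width, heights and mask pattern are made up):

```python
# Toy 1-D simulation of the sequence just described (heavily simplified).
# The die cross-section is a row of columns; each column is a stack of materials.

WIDTH = 10
mask  = [c in (2, 3, 6, 7, 8) for c in range(WIDTH)]   # where the wires should go
die   = [["substrate"] for _ in range(WIDTH)]

def coat(material):                      # tool 2: deposit a uniform layer
    for column in die:
        column.append(material)

def expose(mask):                        # tool 4: lithography through a mask
    for column, lit in zip(die, mask):
        if lit and column[-1] == "resist":
            column[-1] = "resist*"       # mark as exposed to UV

def etch(target):                        # tool 3: chemical removes one material
    for column in die:
        if column[-1] == target:
            column.pop()

def plane(height):                       # tool 1: grind everything to one height
    for column in die:
        del column[height:]

coat("resist")
expose(mask)          # UV hits only where wires will be
etch("resist*")       # developer opens channels in the resist
coat("copper")
plane(2)              # grind down level with the top of the remaining resist
etch("resist")        # strip the leftover resist
coat("dielectric")
plane(2)              # grind until the copper is exposed again

print([column[-1] for column in die])
# ['dielectric', 'dielectric', 'copper', 'copper', 'dielectric', ...]
```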

Most modern chips have dozens of individual mask steps, and always several "uniform" steps between them.

I can't follow how it would be possible to reliably create a processor with so many billions of components - 100% functional.

Depends on your definition of reliable. We absolutely cannot reliably produce chips with billions of components where all of them are 100% functional. What we do is build in a lot of functional redundancy -- instead of putting in 16 cache banks, put in 17, plus the logic to swap in the spare if one doesn't pass the tests. This is again done at multiple levels -- for individual rows, for the banks, and in the end for the whole chip, where we "harvest" 2-core chips out of 4-core dies when they have an error in some logic that isn't easy to make redundant.
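
Two quick sketches with made-up numbers - first why insisting on a perfect billion-component die is a losing bet, then how a single spare bank rescues a die with one defect:

```python
import random

# (1) Why "every one of a billion components works" is a losing bet: even a
# one-in-a-billion failure rate per component leaves only ~37% of dies perfect.
p_fail     = 1e-9
components = 1_000_000_000
print(f"perfect-die probability: {(1 - p_fail) ** components:.2%}")   # ~36.79%

# (2) Functional redundancy: build 17 banks, sell 16, so one defect is harmless.
random.seed(0)

def build_die(banks=17, defect_rate=0.03):
    # True = the bank passed its tests (defect_rate is an invented number)
    return [random.random() > defect_rate for _ in range(banks)]

def repair(die, needed=16):
    # Pick 16 good banks and record them (in hardware this would be a fuse map);
    # if fewer than 16 banks work, the die is a dud.
    good = [i for i, ok in enumerate(die) if ok]
    return good[:needed] if len(good) >= needed else None

print("usable bank map:", repair(build_die()))
```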

And even with all that redundancy, it's perfectly normal to ship product when 80% of the chips that come out of the assembly line are complete duds. I believe that GTX480 had <1% yields when it first shipped...
 

veri745

Golden Member
Oct 11, 2007
1,163
4
81
Others above have basically stated it, but it all comes down to layered complexity.

At the base level, you have transistors, but transistors are used to create generic building blocks (logic gates, flops, etc)

Those building blocks are used to create simple logic functions. Simple logic is used to create advanced logic and state machines.

Then you make individual IP blocks (cache, ALUs, PHYs, etc)

Then those are all pieced together to make higher order blocks (Cores, Northbridge, Memory controller, etc).

And finally there is an SoC team that puts all of those blocks together to make a chip.
 