The design & building of modern microprocessors

Status
Not open for further replies.

mutz

Senior member
Jun 5, 2009
343
0
0
hi there,
I've been querying about this endlessly;
it seems there is nowhere on the web any info about how exactly those microprocessor manufacturers manage to produce such inconceivable work -
not at the manufacturers' sites, wiki, private sites, or even the Scirus search engine you guys recommended.
Maybe at the IEEE site, but that involves payments and so on.
If anyone can share info regarding how those microchips are made - how exactly anyone manages to sketch (or whatever) a billion-transistor CPU :shocked: and what the requirements for that kind of operation are,
does it involve viewing sketches on some 200-inch display in order to be able to design each gate, or do they design a logic gate and then multiply it by thousands,
is there any meaning to every transistor on the die and what happens if some do not work due to a manufacturing bug,
do they cut the die and then print the circuit on top of it -
every detail; it is very confusing and misunderstood.
regards :).
 

mutz

Senior member
Jun 5, 2009
343
0
0
That should explain the process pretty well.
it gives a pretty good explanation but doesn't actually answer all the queries..
thanks anyhow ;).
 
May 11, 2008
22,606
1,473
126
Originally posted by: mutz

Does it involve viewing sketches on some 200-inch display in order to be able to design each gate, or do they design a logic gate and then multiply it by thousands? Is there any meaning to every transistor on the die, and what happens if some do not work due to a manufacturing bug?




As far as I know, nowadays the chip manufacturers use a design language that builds up the basic functionality from libraries. Part of these libraries are made by the chip designer; the other part is delivered by the companies that sell the equipment used to make the chips. When the chip seems to work after simulation, they do some "hand tweaking" to improve performance. Hand tweaking might as well be running simulations non-stop while changing some of the characteristics of certain transistors in critical paths.
When they are happy with the result, they have to translate this into data that tells the machine that actually makes the chips how to expose the silicon, what chemicals to add, and for how long. This data is fed to the machine and the machine starts to work. These machines are built by, for example, ASML. This is of course oversimplified, but that is because I do not know all the details.
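
To make the "design language plus libraries" part a bit more concrete, here is a minimal Verilog sketch (the module and all names are invented purely for illustration, not any real library cell): the designer writes down the behavior, and the synthesis tool maps it onto gates from the manufacturer's cell library.

// A designer writes behavior like this by hand; a synthesis tool
// then maps it onto AND/OR/inverter cells from the vendor library.
// Nobody draws the transistors for logic like this.
module majority3 (
    input  wire a, b, c,
    output wire y
);
    // "Majority vote": y is 1 when at least two inputs are 1.
    assign y = (a & b) | (a & c) | (b & c);
endmodule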




Not all the transistors in the chip are the same, I think. I do know that they use field-effect transistors, and the fun part of these transistors is that you can easily put them in parallel to boost current. And the more current, the faster you can switch. This is not entirely true and depends on a lot of other variables, like the applied voltage, as well.
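
A back-of-envelope way to see that current/speed link (a first-order approximation, not a real device model): the time to charge a load capacitance is roughly

    t_delay ≈ (C_load × V_swing) / I_drive

so two identical FETs in parallel act like one device of double width, roughly doubling I_drive and halving the delay into the same load. In practice the parallel pair also adds its own gate and drain capacitance to whatever drives it, which is part of why it is "not entirely true".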

A quote about the defect in the ATI/AMD R520 GPU:

ATI were open about talking about the issue they faced bringing up R520, sometimes describing the issue in such detail that only electronic engineers are likely to understand. However, their primary issue when trying to track it down was that it wasn't a consistent failure - it was almost random in its appearance, causing boards to fail in different cases at different times, the only consistent element being that it occurred at high clockspeeds. Although, publicly, ATI representatives wouldn't lay blame on exactly where the issue existed, quietly some will point out that when the issue was eventually traced it had occurred not in any of ATI's logic cells, but instead in a piece of "off-the-shelf" third-party IP whose 90nm library was not correct. Once the issue was actually traced, after nearly 6 months of attacking numerous points where they felt the problems could have occurred, it took them less than an hour to resolve in the design, requiring only a contact and metal change, and once back from the fab with the fix in place, stable, yield-able clockspeeds jumped on the order of 160 MHz.



High-level_synthesis

Register_transfer_level

Stepper

Photolithography


These processors nowadays have a lot of extra transistors for redundancy. It might well be that a CPU with 1 MB of cache really has, for example, 1.1 MB. When some cache lines fail, they can deactivate the part of the cache that is damaged and activate the redundant part. Now this causes some rerouting, and I guess this is also a deciding factor in how fast a CPU will clock. The fewer defects you have, the higher it will clock. But again, this is not the only deciding factor.
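
A rough sketch of how such remapping could look in logic (the widths and names here are invented for illustration; real redundancy schemes are more involved): a fuse-programmed register names a known-bad row, and any access to it is steered to the spare row instead.

// Illustrative only: one repairable row. A bad-row index is blown
// into fuses at test time; matching accesses select the spare row.
module spare_row_remap #(
    parameter ROW_BITS = 10              // 1024 normal rows, 1 spare
) (
    input  wire [ROW_BITS-1:0] row_in,     // requested row address
    input  wire [ROW_BITS-1:0] fuse_bad,   // bad-row index from the fuses
    input  wire                fuse_valid, // set when a repair was programmed
    output wire [ROW_BITS-1:0] row_out,    // row driven into the normal array
    output wire                use_spare   // selects the redundant row instead
);
    assign use_spare = fuse_valid && (row_in == fuse_bad);
    // When use_spare is high the normal array is not selected,
    // so row_out can simply pass through unchanged.
    assign row_out = row_in;
endmodule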


When it comes to the actual CPU, they have some redundancy built in as well and can replace some blocks, but not much. Sometimes the instructions the CPU has to execute do not function as designed, and they can use microcode to adjust them. But this is usually only possible for complex instructions. With a microcode update, when your PC boots, the CPU executes the BIOS instructions and loads the microcode into a special high-speed RAM inside the CPU. This RAM is linked to the logic that executes the basic assembly instructions.
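
As a generic illustration of the patch idea (a sketch only, with invented names and widths; not how any particular CPU actually does it): the microcode normally comes from an on-die ROM, but a match register loaded at boot can divert one entry point to a patched word.

// Generic sketch of a microcode patch mechanism, not any real CPU:
// fetches normally read the ROM; if the address hits the patch
// match register (loaded by the BIOS update), the patched word wins.
module ucode_fetch (
    input  wire        clk,
    input  wire [11:0] addr,        // microcode entry point being fetched
    input  wire [11:0] patch_addr,  // match register, loaded at boot
    input  wire        patch_valid, // set once a patch has been loaded
    input  wire [47:0] patch_word,  // replacement microinstruction
    output reg  [47:0] uop          // microinstruction actually issued
);
    reg [47:0] rom [0:4095];        // the fixed, on-die microcode ROM

    always @(posedge clk)
        if (patch_valid && addr == patch_addr)
            uop <= patch_word;      // divert to the patched word
        else
            uop <= rom[addr];       // normal ROM read
endmodule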



 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Originally posted by: mutz

Does it involve viewing sketches on some 200-inch display in order to be able to design each gate, or do they design a logic gate and then multiply it by thousands?

lol.. it's called using the zoom feature. But yeah, if we ever had the need to see the individual poly lines of every gate AND the entire chip at once, we'd probably need something bigger than a 200-inch display.
 

mutz

Senior member
Jun 5, 2009
343
0
0
Very, very interesting post, Mr. William, and the quoted ATI part too; it is amazing, the level of professionalism these guys reach!!
amazing!

lol.. it's called using the zoom feature.
don't start with this "crap".. :laugh: man :laugh:
thank you both for your input.
 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
For the front end, it can be summed up as follows:

(1) Architecture
(2) Design of Logic with SystemVerilog/Verilog or VHDL
(3) Simulations after Simulations (no timing)
(4) Synthesize Design (from HDL language to netlist with libs)
(5) Simulations after Simulations (with timing) sometimes called Gate Level Simulation

I'll leave the back-end info to people who know more about it than I do. But at this stage the chip is functionally done; more tool and speed-path work is done to place and route the synthesized design (using the libraries) and to make the masks... Like I said, there is other work done at this stage, but it isn't someone sitting down placing transistors one by one. I am sure there are cases where that may happen, but it is rare in the "typical" digital circuit.
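
As a toy illustration of steps (2)-(3) above (everything here is invented for illustration; real designs and their verification environments are vastly larger): the design is plain Verilog, and "simulation" is a testbench that drives it and checks the result.

// Step (2): the design itself, a 4-bit counter.
module counter (
    input  wire       clk, rst,
    output reg  [3:0] count
);
    always @(posedge clk)
        count <= rst ? 4'd0 : count + 4'd1;
endmodule

// Step (3): a self-checking testbench, run in a simulator.
module tb;
    reg clk = 0, rst = 1;
    wire [3:0] count;

    counter dut (.clk(clk), .rst(rst), .count(count));

    always #5 clk = ~clk;           // free-running toy clock

    initial begin
        #12 rst = 0;                // release reset after one edge
        #100;                       // let ten clock edges pass
        if (count == 4'd10) $display("PASS");
        else                $display("FAIL: count=%0d", count);
        $finish;
    end
endmodule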

this may help also:
http://www10.edacafe.com/book/...=transeda%2Fch-04.html

(I am not speaking on behalf of any company)
 

CanOWorms

Lifer
Jul 3, 2001
12,404
2
0
Originally posted by: mutz

Is there any meaning to every transistor on the die, and what happens if some do not work due to a manufacturing bug?

This is extremely common when fabricating a chip. Many chips are stressed (voltage, temperature, etc.) before being shipped to customers so that many weak or defective devices are screened out. Some customers do even more additional testing to weed out failing or out-of-spec devices.

Time-zero failures are to be expected, but one which occurs in the field is potentially a major issue.

However, it is still always possible for a single transistor (or a non-transistor localized defect) to fail, causing an entire device to stop working properly. It really depends on the design, where the failure occurs, what type of failure it is, etc.
 

mutz

Senior member
Jun 5, 2009
343
0
0
There was a document about AMD saying that they take a quad-core processor and test its abilities; if one of the cores isn't working properly, they sell it as a triple-core chip, and if another is malfunctioning they sell it as a dual-core or single-core respectively.
From what the different posters here say, it is actually possible to control different areas of the product, and it's actually programmable and changeable - tweakable during the creation procedure and even after it has been made; this is simply unbelievable, being able to "talk" with a machine!
As for what Mr. William has pointed to - high-level synthesis -
High-level synthesis (HLS), sometimes referred to as C synthesis, electronic system level (ESL) synthesis, algorithmic synthesis, or behavioral synthesis, is an automated design process that interprets an algorithmic description of a desired behavior and creates hardware that implements that behavior.
- Wiki.
It is actually possible to create a machine that, by a specifically implemented software algorithm, can calculate different portions of a chip and even create it!!
This is simply inconceivable; they have some sort of machine that takes an algorithm and builds a chip out of it by itself.
It seems that the engineers hardly have to do anything by themselves!!
I'd like to watch this happen.
As for the specific design, still, what does "libraries" mean, exactly..?
I could never really grasp the idea.
 

Born2bwire

Diamond Member
Oct 28, 2005
9,840
6
71
From a manufacturer's point of view, outside of the differences in yields, it pretty much costs the same to create any chip that has the same wafer size and footprint. If I want to produce a chip for a line of radios, it is cheaper to design and create a single chip. This way, I only have one product to design, produce, test, etc. To deactivate the features for the lower-end products of the line, I can simply disable the features using software, or physically disable those portions of the chip by either using a chip that has a bad section or by destroying that section. A lot of chips have high-current fuses. When a manufacturer wants to disable a section (either to downgrade the features or because that section was not up to tolerances), they can run a current through the necessary fuses and permanently blow them open. Sometimes a manufacturer will place extra sections on a chip to have backups in case one or more of the sections fail. The speed of chips often relies on the quality of their production, so when you buy a Core 2 Duo chip, most of the chips are all the same despite the different clock rates. The difference is that the chips rated for a lower clock did not come out well enough to pass the tests at higher frequencies. They could also just use chips that were rated for higher frequencies in the lower-end clock rates as well. It costs them the same to produce them; the difference is the yields. So the binning of higher-clock-capable chips into lower-clocked parts when yields are good, or the fact that a chip was binned at a lower clock only because it failed some minor test, is what allows people to successfully overclock their chips.

EDIT: Chips by themselves are too complex to design by hand. But they are built up in sections. What usually happens is that the basic components are laid out by hand. An engineer will design very good AND, OR, NOR, NAND, etc. gates. They use these basic building blocks to build more complex but necessary units (or they will build those from scratch too), like an adder. By creating a basic library of components, the designers have a toolkit of prefabricated building blocks. These building blocks are used by the computer to design the chip. Critical paths and sections can still be done by hand, but algorithms are used to optimize the placement of sections, the routing of the interconnects, and other tedious but very large parts of the design. Lord Banshee mentioned using Verilog/VHDL, but what I am talking about in this edit is the very low-level chip design where you lay out the actual transistors and interconnects. This is done at the VLSI level.

But what it all comes down to is that chips, whether designed at a low level using VLSI layout or something similar, or using a higher-level digital logic language like Verilog, are built out of repetitive building blocks. A 16-bit adder is nothing more than 16 one-bit adders hooked up together. A cache is built up of smaller units. This allows the designers to make far more complex designs by letting the computers decide how to arrange all these basic blocks.
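
To make that adder example concrete, here is exactly that structure sketched in Verilog: one full-adder building block, designed once, instantiated 16 times (plain ripple-carry for simplicity; real adders are usually fancier).

// The "repetitive building blocks" idea: one full adder, reused
// 16 times to form a 16-bit ripple-carry adder.
module full_adder (
    input  wire a, b, cin,
    output wire sum, cout
);
    assign sum  = a ^ b ^ cin;
    assign cout = (a & b) | (cin & (a ^ b));
endmodule

module adder16 (
    input  wire [15:0] a, b,
    output wire [15:0] sum,
    output wire        cout
);
    wire [16:0] carry;
    assign carry[0] = 1'b0;

    genvar i;
    generate
        for (i = 0; i < 16; i = i + 1) begin : bit_slice
            full_adder fa (
                .a(a[i]), .b(b[i]), .cin(carry[i]),
                .sum(sum[i]), .cout(carry[i+1])
            );
        end
    endgenerate

    assign cout = carry[16];
endmodule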
 

mutz

Senior member
Jun 5, 2009
343
0
0
Originally posted by: Born2bwire

But what it all comes down to is that chips, whether designed at a low level using VLSI layout or something similar, or using a higher-level digital logic language like Verilog, are built out of repetitive building blocks.
Very well explained!
Obviously the VLSI layout work is done at a different level than Verilog/VHDL, and probably around the level where the register transfer level is used.
From "Lord Banshee"'s link, on IP in the design flow -
Modern designs have become so large that a single device can be a whole system on a chip (SoC). It is for this reason that many chip design teams now use IP (intellectual property). These are cores of functionality which have already been written in an HDL and can be easily included in a design. IP cores can be devices that were once large ICs; for example - the Pentium processor. The idea behind using IP blocks is to save on design effort. In actual fact, the cost of using an IP block is between 30% and 100% of the time to design the block from scratch, depending on the quality of the IP core. IP blocks are always written in RTL so that they can be synthesized.
This adds to it: they can actually take an old CPU design and include its cores in a new one without the need to redesign it all from scratch..!!
I'm not sure what it's called when one technological invention is essentially needed in order to build the next step!
This definitely simplifies everything; it seemed much more complex.
There must be some presentation on this subject, or at least about the different simulations.
lol.. what now.. it seemed much more interesting before :laugh:
After you get the basics, it seems you can explain it all in an hour to someone who wishes to understand.. :laugh:
Almost a year of research..,
well,
thanks a lot, it seems to pretty much settle the matter. :)
http://en.wikipedia.org/wiki/File:Dunnington_Xeon.jpg
a nice pic for the memories. :)
You all contributed some pieces each :)
greatly appreciated :),
absolutely the best forum to be on. :thumbsup:
 