brybir
Senior member
- Jun 18, 2009
I see where the confusion is now. I wasn't talking about disabling extra SPs to increase yields; what I meant was that AMD seems to be able to fit 110-120 xtors in the same space where NV fits 100. From what Scali said, there was some positive reason why NV chose to fit only 100 xtors in that space.
Without getting overly complicated, it is possible on any given process to have significant sub-micron defects for various reasons. Oftentimes these defects are the result of numerous factors, but they are frequently exacerbated by things like voltage, current, temperature, and EM field fluctuations, which can ultimately turn what would have been a trivial defect into a big problem. By having lower densities it is possible to avoid some of the stresses that come with denser placement (high-density areas can create "hot-spots" which cause surrounding areas to fail, for example). I sort of think of it as a lower density allowing the part to "breathe" better; even though it's an absolutely terrible analogy, it works in my head.
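To put rough numbers on the hotspot idea, here is a minimal sketch; the wattages and areas below are made-up illustrative values, not measurements of any real chip. All it shows is that the same switching power packed into a smaller area means a higher local power density, and therefore a hotter spot.

```python
# Toy illustration with made-up numbers: the same switching power spread over
# a smaller area gives a higher local power density, i.e. a hotter spot.
def power_density(power_w, area_mm2):
    """Average power density in W/mm^2 over a block of logic."""
    return power_w / area_mm2

dense_block  = power_density(power_w=5.0, area_mm2=2.0)  # tightly packed logic
spread_block = power_density(power_w=5.0, area_mm2=3.0)  # same logic, spread out

print(f"dense:  {dense_block:.2f} W/mm^2")   # -> dense:  2.50 W/mm^2
print(f"spread: {spread_block:.2f} W/mm^2")  # -> spread: 1.67 W/mm^2
```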
Also consider that the problem of transistor placement (and by association, density) is a balancing act between meeting your design goals and making money. Within those two factors sits what is essentially a constrained optimization problem, where the constraints are the wire-ability of the design, the wire length of the interconnects, and the total area required by the transistors (among many other issues). What this all means is that a more complex part will often require more complex routing, wiring, and strategic placement decisions to support its additional operations. If you have such a complex part, it may serve you better to use a lower density so you can accommodate the other parts of the design more easily; a toy sketch of that trade-off is below.
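As a very rough sketch of that kind of trade-off (every function and constant below is invented purely for illustration, not taken from any real placement tool or from NV/AMD data): packing transistors tighter shrinks the die, but routing congestion and thermal limits get worse, so the cheapest workable density usually lands below the maximum the process could pack.

```python
# Hypothetical toy model of the density trade-off. All constants are invented
# for illustration; none of them come from a real process or a real GPU.

def die_area(xtor_count, density):
    """Die area (mm^2) needed to place xtor_count transistors at a given density."""
    return xtor_count / density

def routing_penalty(density):
    """Extra cost from congestion/longer detours; grows quickly as placement tightens."""
    return (density / 100.0) ** 3

def hotspot_violation(density, watts_per_xtor, limit_w_per_mm2=2.5):
    """Thermal constraint: local power density must stay under a limit."""
    return density * watts_per_xtor > limit_w_per_mm2

def best_density(xtor_count=3_000, watts_per_xtor=0.02):
    best = None
    for density in range(80, 131, 5):  # candidate densities (xtors per mm^2, toy scale)
        if hotspot_violation(density, watts_per_xtor):
            continue  # too dense: violates the thermal constraint outright
        cost = die_area(xtor_count, density) + 10 * routing_penalty(density)
        if best is None or cost < best[1]:
            best = (density, cost)
    return best

print(best_density())  # the cheapest workable density sits below the densest packable one
```

The point isn't the numbers, just the shape: the cost-minimizing density ends up below the densest placement the process could physically manage, which is exactly the kind of "positive reason" for a lower density being discussed here.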
In the case of Fermi, they added quite a bit of logic to the design for the HPC market. These changes were fairly significant redesigns of the core logic and certainly made the IC design more complex. That complexity likely led to a lot of design and manufacturing trade-offs that ultimately resulted in the parts we can purchase today. Perhaps one issue they were facing was the hotspot problem mentioned above, which occurs when certain parts of the IC heat up from use while other areas are not engaged and so stay cooler. It could be that Fermi, in certain applications, was having hotspot issues such that the design engineers predicted a certain amount of failures at manufacturing, or a reduced IC lifetime and increased warranty claims. To combat those hotspots, perhaps they spread things out on the IC as best they could. But, as stated above, IC design is a constrained optimization problem, so when you move one thing, everything else has to move with it. Maybe you end up with a somewhat bigger chip. Maybe you end up removing features, or maybe you end up with a more or less dense chip after all is said and done.
Point being, in the end, each issue that comes up has to be addressed, and the more significant ones require revisions of the design. Those revisions, over time, ultimately dictate the density you see in ICs. In that sense, density is often a goal (whatever that may be for any given IC), but at the same time it is often something that follows function and practicality in order to achieve a working part at the desired price and in the desired time frame.