
Intel sub-22nm roadblocks and delays ahead?

If smaller dies keep having greater heat problems like 22nm did, I am in no rush for more shrinks.
 
If smaller dies keep having greater heat problems like 22nm did, I am in no rush for more shrinks.

What are the heat problems you're referring to? You mean in terms of overclocking?

For everyone else, I thought IB was a step forward in terms of power consumption.
 
What are the heat problems you're referring to? You mean in terms of overclocking?

For everyone else, I thought IB was a step forward in terms of power consumption.


It's a step forward for everyone with regard to power consumption.

Actually, I retract my last post as the temp issues seem to all stem from the heat dissipation design, as we've shown when we give it a little help.
 
Actually, I retract my last post as the temp issues seem to all stem from the heat dissipation design, as we've shown when we give it a little help.

Regardless of what part of the CPU design caused it, the bottom line is moving to 22nm caused higher CPU temps than on 32nm. If it is as simple as fixing the heat dissipation design, then fantastic. My only question is if it was so simple, why did Intel not do it for IB?
 
Regardless of what part of the CPU design caused it, the bottom line is moving to 22nm caused higher CPU temps than on 32nm. If it is as simple as fixing the heat dissipation design, then fantastic. My only question is if it was so simple, why did Intel not do it for IB?

The simplest and most cynical answer is that they had no incentive to do so. It's not like they benefit from overclocking, and as long as the chips run "within specified operating parameters" (nod, Commander Data) then they are happy.

This may well be one of the most direct impacts of AMD's lack of competitiveness in the enthusiast market.
 
Regardless of what part of the CPU design caused it, the bottom line is moving to 22nm caused higher CPU temps than on 32nm. If it is as simple as fixing the heat dissipation design, then fantastic. My only question is if it was so simple, why did Intel not do it for IB?

People who care about it can be counted in the 1000s, while everyone who doesn't can be counted in the 100s of millions, myself included.

Essentially it's only a 4.5GHz+ issue, and a limited one at that.
Tjmax was also increased by 5C.
 
if it was so simple, why did Intel not do it for IB?

Because it is also more expensive, and of no benefit to 99.9% (number from my arse!) of their customers. The weakness of the current design is not apparent until you run it out of spec. I don't really expect Joe Q Dellbuyer to have to pay a bit more for his PC to subsidize my hobby.
 
Because it is also more expensive, and of no benefit to 99.9% (number from my arse!) of their customers. The weakness of the current design is not apparent until you run it out of spec. I don't really expect Joe Q Dellbuyer to have to pay a bit more for his PC to subsidize my hobby.

Agreed, but my original statement about 14nm and beyond still stands. This problem was not that big of a deal with IB, however, if the issue gets worse going forward, it will impact more people. That was my only point. I am confident that Intel will work to make sure that does not become reality.
 
Agreed, but my original statement about 14nm and beyond still stands. This problem was not that big of a deal with IB, however, if the issue gets worse going forward, it will impact more people. That was my only point. I am confident that Intel will work to make sure that does not become reality.


Well, I'm not sure how much of it is really an issue, because even without solder, when we swap the IHS our temps are pretty similar to SB, I think.

Even with 1.26V peak, my proc never has a core above 66 or 67C and that's in linpack.
 
..... Essentially it's only a 4.5GHz+ issue, and a limited one at that.
Tjmax was also increased by 5C.

Does the Ivy have the same longevity/reliability as the Sandy despite the increased Tjmax temps? From a layman's POV, it would seem that shrinking the process node would make the chip more sensitive to operating voltages and temperatures.
 
What exactly do die shrinks do for CPU performance? Lower voltage and heat?

Several good things GENERALLY happen during a process shrink.

Physical size gets smaller (save some $$$ there or you can put more complexity/performance on the chip)
Devices get faster (higher clock speeds or you can save more power)
Voltages get lower (save power)
 
Several good things GENERALLY happen during a process shrink.

Physical size gets smaller (save some $$$ there or you can put more complexity/performance on the chip)
Devices get faster (higher clock speeds or you can save more power)
Voltages get lower (save power)
A smaller process lets you cram more transistors into a square millimeter, so the same number of transistors takes up less die space.
A smaller die size means you get more dies out of a wafer, which means more $$$ per wafer.
So there are financial as well as technical advantages.
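
To put rough numbers on the dies-per-wafer point, here is a quick back-of-the-envelope sketch in Python. The die areas are assumptions (roughly in the quad-core Sandy/Ivy ballpark, not exact figures), it uses a common gross-die approximation, and it ignores defects, scribe lines, and yield entirely:

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Rough gross-die estimate: wafer area / die area minus an edge-loss term.

    Common approximation: dies ~= pi*(d/2)^2 / A - pi*d / sqrt(2*A).
    Ignores defects, scribe lines, and reticle constraints.
    """
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Die areas below are assumptions for illustration only.
for name, area in [("32nm-class die", 216.0), ("22nm-class die", 160.0)]:
    print(f"{name} ({area:.0f} mm^2): ~{gross_dies_per_wafer(area)} gross dies per 300 mm wafer")
```

Even with the hand-wavy inputs, the smaller die gets you noticeably more gross dies out of the same wafer, which is the financial side of the argument above.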

Mostly it's the decrease in leakage brought by a shrink that gives you lower-power chips (electrical, not computational), and the faster switching speed of the transistors themselves that gives you higher frequencies.

But as processes become ever smaller, the probability of errors and defects rises, the equipment that produces the chips gets more complex and expensive, etc.
So the return on investment is also a consideration when a company decides whether to move to a new process. What we might see in the future is that chips get faster, but also rise steadily in price across all segments.

On the subject of heat, do we know if IB's heat comes mainly from the CPU part of the die (as opposed to the GPU), or is the heat spread evenly across the die?
Would it have been better if Intel had put in an iGPU several times bigger than the CPU (in terms of mm^2), to get a beefy iGPU while also increasing surface area and thus reducing heat density?
 
On the subject of heat, do we know if IB's heat comes mainly from the CPU part of the die (as opposed to the GPU), or is the heat spread evenly across the die?
Would it have been better if Intel had put in an iGPU several times bigger than the CPU (in terms of mm^2), to get a beefy iGPU while also increasing surface area and thus reducing heat density?
The heat comes almost entirely from the CPU-side logic. The GPU can use a decent amount of electricity per mm^2, but I'd imagine most people who care have the IGP off or idling anyway.

With that in mind, there are thousands of ways they could decrease temperatures better and more cheaply than having a larger IGP just sitting there wasting space. They could duct-tape a piece of rusty iron north of the cores, for one; honestly, it would help multiple times more than increasing the IGP size. Or spend even less and solder the IHS on like they have for years.
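
To put made-up numbers on that, here is a tiny sketch of average vs. hot-spot heat density. Every figure is an assumption chosen for illustration, not a measured IVB value:

```python
# Assumed figures: ~60 W of a 77 W package dissipated in ~40 mm^2 of CPU-core area.
package_w, core_w, core_mm2 = 77.0, 60.0, 40.0
for die_mm2 in (160.0, 250.0):  # smaller die vs. one padded out with a much larger iGPU
    print(f"{die_mm2:.0f} mm^2 die: average {package_w / die_mm2:.2f} W/mm^2, "
          f"core hot spot stays ~{core_w / core_mm2:.2f} W/mm^2")
```

The average density drops when the die grows, but if the watts are concentrated in the core logic, the hot spot the cooler actually has to deal with barely moves.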
 
Mostly it's the decrease in leakage brought by a shrink that gives you lower-power chips (electrical, not computational), and the faster switching speed of the transistors themselves that gives you higher frequencies.

Is this really the case? My understanding was that switching power went down with a die shrink, but that leakage actually becomes more of a problem as the feature size goes down, because of smaller thicknesses of insulators, and other related effects.

It could be that leakage increases as a percentage of total power loss as you shrink the node.
 
Is this really the case? My understanding was that switching power went down with a die shrink, but that leakage actually becomes more of a problem as the feature size goes down, because of smaller thicknesses of insulators, and other related effects.

It could be that leakage increases as a percentage of total power loss as you shrink the node.

Leakage scales with operating temperature, aggregate transistor width (the summed width of all the transistors in the IC), and operating voltage.
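
As a rough illustration of those three knobs, here is a toy static-power model in Python. The off-current baseline, the double-every-10C rule of thumb, the linear voltage term, and the 100 m of aggregate width are all assumptions for intuition, not Intel data:

```python
def toy_leakage_w(width_total_m: float, v_dd: float, temp_c: float,
                  i_off_na_per_um_at_25c: float = 1.0) -> float:
    """Toy static-power model, for intuition only; every number is an assumption.

    Off-current per micron of transistor width is assumed to double every 10 C
    (a common subthreshold rule of thumb) and to scale linearly with supply
    voltage; real devices add gate and junction leakage with their own,
    stronger voltage dependence.
    """
    i_off_a_per_um = i_off_na_per_um_at_25c * 1e-9 * 2 ** ((temp_c - 25.0) / 10.0)
    return i_off_a_per_um * (width_total_m * 1e6) * v_dd

# Same hypothetical chip (100 m of aggregate transistor width), two operating
# points: roughly stock vs. an overvolted, hotter overclock.
print(f"1.05 V, 60 C: {toy_leakage_w(100.0, 1.05, 60.0):.1f} W")
print(f"1.30 V, 90 C: {toy_leakage_w(100.0, 1.30, 90.0):.1f} W")
```

Even in this crude form, the temperature and voltage dependence is why leakage balloons when you overclock and overvolt.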

[Chart: Ptotal vs. Vcc, temperature, and clock frequency]


There is no requirement that leakage increases with node shrinks, nor is there a requirement that the percentage of power lost to leakage increase as nodes shrink.

But it costs money and time to develop nodes that don't fall into that trend. So, if falling on that trendline for the next node under development does not look to be excessively problematic then the decision makers will choose the "why not?" option and invest their R&D dollars elsewhere in the pipeline.

For us outsiders, the laypeople, to all intents and purposes we can safely formulate a rule of thumb to guide our expectations that says "every node will have more leakage as a percentage of total power usage", but it behooves us to have the wherewithal to realize that this rule of thumb is not a cause-and-effect but merely an outside observation of the decisions made inside the company.

Those decisions can be made differently at any point going forward, so we ought to be prepared for that contingency.
 
There is no requirement that leakage increases with node shrinks, nor is there a requirement that the percentage of power lost to leakage increase as nodes shrink.

I'm not sure that is really true. From a simple logical standpoint, it makes sense that in a device that relies on an insulator to keep charges separate, the thinner you make the insulator, the less well it works. Technologies such as high-k dielectrics were developed to counter this trend, because without them, simply shrinking the process leads to ever-increasing losses through gate leakage. We are already at the point where a conventional transistor gate insulator would be only a few atoms thick.
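
A toy way to see the shape of that trend: direct-tunneling gate leakage rises roughly exponentially as the dielectric thins, which is why a high-k film (physically thicker at the same capacitance) helps. The characteristic length below is an arbitrary assumption, chosen only to show the exponential shape, not a fitted device parameter:

```python
import math

def relative_gate_tunneling(t_phys_nm: float, ref_t_nm: float = 2.0,
                            decay_nm: float = 0.25) -> float:
    """Toy model: gate leakage grows as exp((ref - t)/decay) as the film thins."""
    return math.exp((ref_t_nm - t_phys_nm) / decay_nm)

for t_nm in (2.0, 1.5, 1.2, 1.0):
    print(f"{t_nm:.1f} nm dielectric: ~{relative_gate_tunneling(t_nm):.0f}x reference gate leakage")
```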

When it comes to power, making transistors smaller and putting more of them into a small space is a good news, bad news story. The good news: Smaller transistors consume less power, and it takes less voltage to drive them. The bad news: Increasing density of ever faster transistors means the overall chip consumes more power and generates more heat. In addition, power leakage becomes more problematic with shrinking feature sizes, wasting a higher portion of total microprocessor power.
http://download.intel.com/museum/Moores_Law/Printed_Materials/Intel_Silicon_Brochure.pdf

[Graph image]

(Graph from a different source; have seen a few variations but can't find the original at Intel yet.)
 
In the traditional sense of process scaling (just shrink all features), the primary source of power reduction is voltage scaling, NOT leakage scaling. While modern transistors are designed to reduce leakage, TRADITIONALLY it's the dynamic power on the gate cap and wire cap that allowed us to reduce power.
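
For anyone who wants the arithmetic behind that, the classic switching-power relation is P ~= alpha * C * V^2 * f, so even a modest voltage drop buys a big power reduction. A minimal sketch with assumed (not measured) capacitance, activity, and clock numbers:

```python
def dynamic_power_w(c_switched_f: float, v_dd: float, freq_hz: float,
                    activity: float = 0.1) -> float:
    """Classic switching-power estimate: P = alpha * C * V^2 * f.

    c_switched_f is total switched (gate + wire) capacitance in farads;
    activity is the fraction toggling per cycle. All values used below are
    assumptions chosen only to show how strongly voltage matters.
    """
    return activity * c_switched_f * v_dd ** 2 * freq_hz

old = dynamic_power_w(100e-9, 1.2, 3.5e9)   # hypothetical 100 nF switched cap at 3.5 GHz
new = dynamic_power_w(100e-9, 1.0, 3.5e9)   # same chip, supply dropped to 1.0 V
print(f"1.2 V: {old:.1f} W   1.0 V: {new:.1f} W   ({new / old:.0%} of the 1.2 V figure)")
```

Because of the V^2 term, dropping the supply from 1.2 V to 1.0 V alone cuts the dynamic power to about 70% in this sketch, before any frequency or capacitance change.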

Leakage is a complete mess of different variables. As we scale, channel length gets shorter (leakage worsens), but we can then use a smaller gate width too (leakage improves); to get better channel control, oxide thickness shrinks (gate leakage worsens), but then again the cross-sectional gate area is smaller (gate leakage improves). On top of that, voltage decreases (leakage improves) and doping gets higher (lowers junction leakage?). So at the end of the day, only a process engineer can tell you what happened. The creation of HKMG and FinFET changed everything during that shrink.

As for requirements for leakage, again it's up in the air. The process team will go to the design team and ask "hey, you want the same transistor frequency but lower leakage or do you want faster transistors but equal leakage?"
 