Broadwell GT3 48EUs? TDP range 4.5W-47W


Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
Intel made GT2 iGPUs far more popular with Haswell than with the previous-gen Ivy Bridge. GT3e Broadwell-K might be the next step. I believe there will be far more GT3 processors next year (Broadwell) than right now (Haswell). Intel might not have decided yet whether it's worth launching a GT4 version of Broadwell, as GT3e might be more than enough to beat DDR3-bound desktop Kaveri/Carrizo and please Apple (2014 MacBook Pro & iMac), but I think such a version definitely comes in 2015 with Skylake. Interesting times ahead. :)
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
If they had a bottomless pit of money to burn, yes.
The idea of massively improved integrated graphics being widely available from Intel should excite you.

Anyway, no, they won't be burning tons of money. You can thank Apple for that -- they're subsidizing the cost of these for enthusiasts.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
A bit OT, but I have to say, looking at the info on Broadwell and Intel's 14nm process, I'm starting to get more excited about Broadwell-E than I am about Haswell-E. It seems likely that BW-E will clock faster than HW-E - so maybe no IPC gain, but higher throughput would still be a bonus :thumbsup: I assume they'll both use the X99 chipset and the same socket.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,379
651
126
Sounds interesting. Care to share some more details on why you think that is the case, and what clocks are reasonable to expect? I'm wondering, because the latest node shrinks have resulted in nearly no CPU frequency increase at all.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
Sounds interesting. Care to share some more details on why you think that is the case, and what clocks are reasonable to expect? I'm wondering, because the latest node shrinks have resulted in nearly no CPU frequency increase at all.
Intel's 22nm did not bring higher CPU frequencies because multigate device performance is poor at high voltages. If you look at the lower end of the spectrum, e.g. Bay Trail, the clock gains were tremendous. Their 32nm process was a huge improvement over their 45nm process, which in turn was a huge improvement over their 65nm process. Prior to that, transistor performance had stagnated -- we're in a bit of a golden age right now, and it's only going to get better.

If you look at AMD, their 32nm PDSOI process was DOA and barely improved transistor performance over 45nm. That was IBM's fault. As far as 28nm goes, the performance hit comes from regressing from PDSOI to bulk planar, and is no fault of the 28nm process itself. It has nothing to do with generational improvements tapering off -- in fact, recent nodes have given substantial gains. It's all just a bit muddy because of PDSOI: if you could compare 45nm planar to 32nm planar to 28nm planar, you'd see the kind of gains that are supposed to be there. But don't try, because the CP planar numbers aren't easily accessible... believe me, I've been looking for them. Do keep an eye out for their 20nm process, though, which will get the benefit of going from gate-first to gate-last.

TSMC is one place where everything has stayed uniform, and you can definitely see that today's 28nm GPUs clock higher than 40nm GPUs did. HKMG provided a tremendous boost for overclocking, especially with a "proper" gate-last implementation. However, their 20nm process definitely tapers off the performance gains; it's not nearly the improvement over 28nm that 28nm was relative to 40nm.

Back to Intel, I'm guessing he saw this chart that I linked earlier. As I've pointed out before, the FinFET overclocking penalty is only applied once, and that's when you move from a planar process to a FinFET one. We're moving from a FinFET process to another FinFET process with the move to 14nm, so we'll see the overclocking gains of yore, barring any unforeseen problems. 10nm brings even greater improvements; that is if Intel can hit the projected replacement of a silicon channel with SiGe or Ge.
 

jpiniero

Lifer
Oct 1, 2010
16,989
7,389
136
We're moving from a FinFET process to another FinFET process with the move to 14nm, so we'll see the overclocking gains of yore, barring any unforeseen problems.

Between the heat density and Intel optimizing for low power, don't count on it. At a given TDP, Intel might be able to push clock speeds higher, but forget it once you get much above 4 GHz.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
Heat density is a red herring. As far as optimization goes, they're still using a high performance process, not a low power one.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Back to Intel, I'm guessing he saw this chart that I linked earlier. As I've pointed out before, the FinFET overclocking penalty is only applied once, and that's when you move from a planar process to a FinFET one. We're moving from a FinFET process to another FinFET process with the move to 14nm, so we'll see the overclocking gains of yore, barring any unforeseen problems. 10nm brings even greater improvements; that is if Intel can hit the projected replacement of a silicon channel with SiGe or Ge.

Thanks Homeles. Yes, this chart and another similar pair (larger, but I can't find them right now).

Between the heat density and Intel optimizing for low power, don't count on it. At a given TDP, Intel might be able to push clock speeds higher, but forget it once you get much above 4 GHz.

If you look at the "Switch Energy vs Gate Delay" graphs (and one has to pay attention to the blue arrows to see what the alignment is, trixy Intelsis): it is shown that the gate delay will be significantly lower, even at lower power usage, for 14nm compared to 22nm.

Looking at the lower-left chart, if we are able to increase Vcore (roughly speaking) by 25%, we should see the gate delay drop by ~40%! Since clock rate scales roughly as the inverse of gate delay, that would be a healthy ~60% (or better) increase in the core clock. To put it another way, even if Broadwell can *only* hit 3.0 GHz, the overclock could be ~5.0 GHz. If Intel can still deliver 3.5 GHz for its top SKU, we'd be talking about a possible overclock of ~5.6 GHz :awe:
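To show the arithmetic (just a back-of-envelope sketch; the 25% Vcore bump, the ~40% delay drop and the 3.0/3.5 GHz base clocks are numbers read off the chart or assumed by me, not anything Intel has stated):

Code:
# Back-of-envelope: frequency scales roughly as 1/(gate delay),
# so a ~40% delay reduction is about a 1/0.6 = 1.67x clock multiplier.

def overclock_estimate(base_ghz, delay_reduction=0.40):
    return base_ghz / (1.0 - delay_reduction)

for base in (3.0, 3.5):
    print(f"{base:.1f} GHz stock -> ~{overclock_estimate(base):.1f} GHz ceiling")

# Prints ~5.0 GHz and ~5.8 GHz; using the more conservative ~60% figure
# above gives the ~5.6 GHz number for the 3.5 GHz SKU.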

The caveat is that we don't have any switching energy vs frequency chart - that would be like Intel giving away the store, because we could draw more exact conclusions with that additional info. Still, things look very promising, both for Broadwell-LGA and Broadwell-E. It would make my day if Intel released a Broadwell-E SKU @ the max workstation TDP of 160W :)


P.S. Switching energy drops an amazing 67% at a given clock rate - that is roughly in line with the areal shrinkage that comes with 14nm - so heat density shouldn't go up (maybe it'll even drop a bit). Hopefully, Intel will also have a better TIM/heat spreader solution that increases the heat flux from the CPU to the HSF.
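Putting rough numbers on that (a sketch assuming dynamic switching power dominates and that logic area shrinks by roughly the same factor as the switching energy - both assumptions on my part, not Intel figures):

Code:
# Heat density ~ (switching energy per transition * clock) / area.
# At a fixed clock, the ratio vs 22nm is just energy_scale / area_scale.
energy_scale = 1.0 - 0.67   # ~67% lower switching energy at the same clock (from the foils)
area_scale = 1.0 / 3.0      # assumed: area shrinks roughly in line with the energy drop

print(f"14nm heat density vs 22nm at the same clock: ~{energy_scale / area_scale:.2f}x")
# ~0.99x, i.e. essentially flat, which is why heat density "shouldn't go up".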
 

jpiniero

Lifer
Oct 1, 2010
16,989
7,389
136
If you look at the "Switch Energy vs Gate Delay" graphs (and one has to pay attention to the blue arrows to see what the alignment is, trixy Intelsis): it is shown that the gate delay will be significantly lower, even at lower power usage, for 14nm compared to 22nm.

Intel made the same kind of claims when talking about 32 vs 22, and that only really held up at lower TDP/clock speeds.

BTW, the chart with the $/transistor makes it look like Intel is going to get 450 mm wafers and EUV in at 10 nm, which seems tough to imagine at this point.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Intel made the same kind of claims when talking about 32 vs 22, and that only really held up at lower TDP/clock speeds.

BTW, the chart with the $/transistor makes it look like Intel is going to get 450 mm wafers and EUV in at 10 nm, which seems tough to imagine at this point.

IIRC, the 32nm and 22nm curves nearly converge at higher switching energies; I'll try to find a link.

I don't think either 450 mm wafers or EUV will be happening @ 10nm, both have been delayed, last I read.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
BTW, the chart with the $/transistor makes it look like Intel is going to get 450 mm wafers and EUV in at 10 nm, which seems tough to imagine at this point.
Jul 9, 2012: Intel and ASML Reach Agreements to Accelerate Key Next-Generation Semiconductor Manufacturing Technologies:

Intel commits €829 million (approximately $1.0 billion) to ASML's research and development programs to help accelerate deployment of new technologies for 450-millimeter (mm) wafers and extreme ultra-violet (EUV) lithography by as much as two years

Apr 19, 2013: ASML On Track to Deliver 450mm Production Equipment in 2015:
The company believes that its partners, such as Intel Corp. Samsung Electronics and Taiwan Semiconductor Manufacturing Co. are on-track to start commercial 450mm production in 2018.

As far as I know, Intel is planning to start using 450mm wafers around 2018 (= 7nm).

I also found this article: Oct 19, 2013: Intel Will Not Reconsider Timing for 450mm Manufacturing.

"We have not changed our timing [regarding 450mm]. We are still targeting the second, latter half of this decade." --Brian Krzanich

@Ajay: According to my links, nothing has been delayed.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Jul 9, 2012: Intel and ASML Reach Agreements to Accelerate Key Next-Generation Semiconductor Manufacturing Technologies:



Apr 19, 2013: ASML On Track to Deliver 450mm Production Equipment in 2015:


As far as I know, Intel is planning to start using 450mm wafers around 2018 (= 7nm).

I also found this article: Oct 19, 2013: Intel Will Not Reconsider Timing for 450mm Manufacturing.

"We have not changed our timing [regarding 450mm]. We are still targeting the second, latter half of this decade." --Brian Krzanich

@Ajay: According to my links, nothing has been delayed.


Thank you for the links witeken, especially the last one.

I guess many of us expected 450mm wafers & EUV @ 10nm (going back a year or two), hence some are calling it a delay while Intel is not. It's a moot point, I suppose; Intel may never have had a hard date on either, since both are very complex transitions. It's good to read that D1X module 2 is still on schedule.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Slightly off-topic: It's indeed a very complex and expensive transition:

Other initial technical problems in the ramp up to 300 mm included vibrational effects, gravitational bending (sag), and problems with flatness. Among the new problems in the ramp up to 450 mm are that the crystal ingots will be 3 times heavier (total weight a metric ton) and take 2-4 times longer to cool, and the process time will be double. All told, the development of 450 mm wafers require significant engineering, time, and cost to overcome.

It is estimated that the transition to 300mm cost the industry as a whole 15 to 20 billion dollars; 450mm will obviously be even costlier.

There are also, in my opinion, some other interesting improvements from 450mm beyond the obvious things like higher throughput. Because the wafer is bigger, a smaller fraction of its area is wasted on partial dies at the edge, so effective yield improves; 450mm offers the cost equivalence of roughly a 1 percent yield increase (the image below shows the idea, and the sketch after it puts rough numbers on it):
[Image: wafer die yield model for 10/20/40 mm dies]
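Rough numbers behind the edge-loss point, using the common dies-per-wafer approximation DPW ~ pi*d^2/(4*S) - pi*d/sqrt(2*S); the 100 mm^2 die size is just an arbitrary example, and no edge-exclusion ring or defect yield is modeled:

Code:
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Common approximation that discounts partial dies lost at the wafer edge.
    d, s = wafer_diameter_mm, die_area_mm2
    return math.pi * d**2 / (4 * s) - math.pi * d / math.sqrt(2 * s)

die_area = 100.0  # mm^2, hypothetical die
for dia in (300, 450):
    dpw = dies_per_wafer(dia, die_area)
    used = dpw * die_area / (math.pi * (dia / 2) ** 2)
    print(f"{dia} mm wafer: ~{dpw:.0f} dies, ~{used:.1%} of the wafer area used")

# 300 mm: ~640 dies (~91% used); 450 mm: ~1490 dies (~94% used). The larger
# wafer wastes a smaller fraction of its area on partial edge dies, which is
# where the "cost equivalence of a ~1% yield increase" idea comes from.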


Another interesting thing is that this transition opens a window for some fab and environmental improvements:
Wafer transitions offer one of the rare periods when new approaches can be developed and integrated into facilities plans. During the 300mm transition, significant developments occurred in factory automation and wafer handling. Similarly, the 450mm transition is a window to update the industry approach to a number of fab systems. Rising energy costs, water scarcity, and climate change will continue to present both challenges and opportunities for semiconductor manufacturing in the 450mm era. These sustainability concerns are driving demand for tools that can more reliably and cost-effectively achieve a shared vision of resource balance.
Along with cost and efficiency improvements, IC makers and consortia driving the transition to 450mm manufacturing expect to achieve similar or better environmental performance. Larger footprints and resource demands from 450mm facilities in conjunction with mandates for environmentally aware operations are compelling fabs and suppliers to consider sustainability and systems integration at greater levels than ever before.


Image from GlobalFoundries:
[Image: GF-2.png]

(x-axis = process node, first green dot corresponds to the 10nm node)
[Image: 450mm wafer costs chart]

[Image: 450mm wafer ramp timeline]


I found this article from 2008; it states that the transition was planned for 2012, which is already more than a year in the past. So there were indeed some delays from the initial announcement, and 450mm will start roughly 50% later than first planned.
May 6, 2008: Intel, Samsung, TSMC Plan Shift to 450-mm Wafers
 

Khato

Golden Member
Jul 15, 2001
1,320
391
136
IIRC, the 32nm and 22nm curves nearly converge at higher switching energies; I'll try to find a link.

I don't believe such was shown on any official gate voltage vs delay graphs - those only went up to 1V. All the graphs I've seen that go beyond that point are based on extrapolation.

And unfortunately, none of the information I've seen regarding the 14nm process provides any assurance that it will be 'better' for overclocking than the 22nm process. All of the charts Intel released at their recent investor meeting - http://intelstudios.edgesuite.net/im/2013/pdf/2013_IM_Holt.pdf - only show switching energy versus gate delay. Now, given the range of percentage change for the 22nm curve, it would appear to be the same 0.6V-1V, 0.8-1.4 normalized gate delay curve that was used for the comparison against the 32nm process. But even if it is, we don't know how much the other parameters that change between the processes affect switching energy - the highest switching energy data point for the 14nm process could be running at 0.9V just as easily as at 1V. Which is to say, 14nm could behave similarly to 22nm at voltages beyond 1V, or it could be even worse.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I don't believe such was shown on any official gate voltage vs delay graphs - those only went up to 1V. All the graphs I've seen that go beyond that point are based on extrapolation.
There's some older, official data from Intel that shows the issue clearly. It's back when they called them MuGFETs, which is a term that's not used often anymore.

I'll see if I can find it and edit this post later.

http://intel.ly/Kld3Jd

"Bad news: Improved voltage scaling (from lower VT) is
also associated with increased Ioff (from improved DIBL)"

Intel ended up achieving better than that in practice, but it is an inherent consequence of moving to multigate devices.
 

Khato

Golden Member
Jul 15, 2001
1,320
391
136
There's some older, official data from Intel that shows the issue clearly. It's back when they called them MuGFETs, which is a term that's not used often anymore.

I'll see if I can find it and edit this post later.

http://intel.ly/Kld3Jd

"Bad news: Improved voltage scaling (from lower VT) is
also associated with increased Ioff (from improved DIBL)"

Intel ended up achieving better than that in practice, but it is an inherent consequence of moving to multigate devices.

Nice find - definitely an interesting read throughout, though I sure won't claim to fully understand the entire contents. (There was a time, but since I don't use it at work, I've forgotten the majority of what I learned in the semiconductor physics courses I took years ago.)

Now that I actually think about what's happening in the channel, though, it does make perfect sense why that would be the case. With a planar transistor, increased gate voltage can create a 'deeper' channel between the source and drain. So at the design voltage the transistor would be sized to provide a certain drive current, but by increasing the gate voltage the effective resistance of the channel drops and drive current increases. With the FinFET, on the other hand, the design voltage likely already turns the majority of the fin into the channel, so there isn't much more semiconductor material for increased gate voltage to act on. That would explain why increasing voltage beyond a point on 22nm only yields slight gains - drive current is increasing only due to the higher voltage, whereas with planar it was increasing due to both the higher voltage and the slight reduction in source-to-drain channel resistance.

Does all that sound about right? If so, it certainly doesn't bode well for 14nm making a dramatic change in overclocking potential.
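For what it's worth, here's a toy version of that taper using the textbook alpha-power model (gate delay roughly proportional to Vdd / (Vdd - Vt)^alpha, with alpha near 2 for an ideal long-channel planar device and closer to 1 once the channel is effectively 'used up'). I'm not claiming this is the exact mechanism above, and the Vt of 0.35 V and the 25% voltage bump are made-up illustrative values:

Code:
VT = 0.35  # volts, made-up threshold for illustration

def relative_delay(vdd, alpha, vref=1.0):
    # Alpha-power model: delay ~ Vdd / (Vdd - Vt)^alpha, normalized to Vref.
    delay = lambda v: v / (v - VT) ** alpha
    return delay(vdd) / delay(vref)

for alpha in (2.0, 1.0):
    gain = 1.0 - relative_delay(1.25, alpha)
    print(f"alpha = {alpha:.0f}: a 25% Vdd bump cuts gate delay by ~{gain:.0%}")

# alpha = 2: ~35% lower delay; alpha = 1: only ~10%. Once drive current is
# merely linear in the overdrive, extra voltage buys very little extra speed.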
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
There's some older, official data from Intel that shows the issue clearly. It's back when they called them MuGFETs, which is a term that's not used often anymore.

I'll see if I can find it and edit this post later.

http://intel.ly/Kld3Jd

"Bad news: Improved voltage scaling (from lower VT) is
also associated with increased Ioff (from improved DIBL)"

Intel ended up achieving better than that in practice, but it is an inherent consequence of moving to multigate devices.

Ugh! I remember IDK being surprised at how well Intel did with its 22nm FinFETs. Well, we'll just have to wait and see whether Intel can achieve an even better result at 14nm or not :(

Ioff is higher (compared to planar), but it scales better with increased Vcc. Kuhn mentions that this can be manipulated by doping - but I'm not sure if she means doping of the gate? The fins are already fully depleted. I didn't think Intel was going to make any big material changes until 10nm, but I wonder if they did with 14nm as a contingency (which would explain the sudden drop in yields) - actually, no, that would be a reset of the yield optimization process, not just a drop, I would think. Man, my head is swimming - peace out!

Nice find - definitely an interesting read throughout, though I sure won't claim to fully understand the entire contents.

+ eleventy! Saved to disk. I love this stuff, but would need to get a master's in device physics (or something similar) to really understand it and a PhD to do anything remotely interesting in the field. That's not part of my career plans, so it's not going to happen. From what IDK has said in the past, the hours are even more brutal than they are in Software Eng.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I find this stuff by just googling random crap and restricting the filetype to PDFs :p