Intel Broadwell Thread

liahos1

Senior member
Aug 28, 2013
573
45
91
Here is a huge Core M review: http://www.notebookcheck.com/Im-Test-Intel-Core-M-5Y70-Broadwell.129544.0.html


Lots of gaming benchmarks. Performance isn't groundbreaking; Haswell-U is much faster.


Interesting frequency log from Dota 2: http://www.notebookcheck.com/fileadmin/Notebooks/Sonstiges/Prozessoren/Broadwell/dota2.png


The GPU runs at roughly 400 MHz most of the time and the CPU at only 800 MHz. No wonder performance isn't great. For consistent performance over several minutes or longer, Broadwell seems to require 10+ watts.

Seems comprehensive; I wish it were in English though. Is this the Lenovo 3 at 3.5 W TDP? Perf/watt seems very good, but performance does not look good versus the reference designs.

http://tabtec.com/windows/lenovo-thinkpad-helix-2-intel-core-m-now-available-us-979/

This one has an aluminium back, so maybe less throttling.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
SunSpider is a short benchmark, favouring Core M and its turbo. The drop comes in longer benchmarks.

But that only counts as a disadvantage if the iPad Air doesn't throttle (or throttles considerably less).
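
To put some rough numbers on the burst-vs-sustained point, here is a purely illustrative sketch; the turbo/base clocks and the energy-budget figures below are assumptions, not measured Core M data:

```python
# Toy model: run at turbo until a fixed thermal/energy budget is spent, then drop to base clock.
# All constants are invented for illustration, not real Core M specifications.

TURBO_GHZ, BASE_GHZ = 2.6, 1.1          # assumed burst and sustained clocks
BUDGET_J = 60.0                          # assumed thermal headroom (joules)
TURBO_W, SUSTAINED_W = 12.0, 4.5         # assumed package power in each state

def average_clock(seconds):
    """Average clock over a run of the given length under this toy model."""
    turbo_time = min(seconds, BUDGET_J / (TURBO_W - SUSTAINED_W))  # headroom runs out here
    base_time = seconds - turbo_time
    return (turbo_time * TURBO_GHZ + base_time * BASE_GHZ) / seconds

print(f"SunSpider-length run (~5 s):   {average_clock(5):.2f} GHz average")
print(f"Long benchmark run (~10 min):  {average_clock(600):.2f} GHz average")
```

In this toy model a few-second JavaScript test finishes entirely inside the turbo window, while a ten-minute gaming or stress run averages close to the sustained clock, which is the gap the longer benchmarks expose.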
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
This will be a fine 12-core server CPU and a nice, fast 10-18 W TDP ultrabook chip.

Apparently 14nm is not mature and the architecture is not technically suited to, e.g., a 5 W TDP, but so what?
And then there are the margins in this market. There are tons of other possibilities here. It just doesn't make any sense, imo. Even if it were 50% faster, it wouldn't make a difference. It's a shame to use it here; Intel should just take its time and not rush products just because it's mobile.
 
Last edited:

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Finally, combined CPU and GPU stress wreaks predictable havoc on the machine, with CPU clock rates of around 500-600 MHz and the GPU reaching only around 300 MHz.

That is awful. The CPU clocks at 1/5 of its maximum and the GPU at 1/3 of its maximum. It is even worse when you consider this thing has a fan.

In addition, the SoC gets destroyed by the Tegra K1 in 3DMark. And 6 hours of battery life while browsing is on the lower end of the spectrum.

If Intel continues this way, it will be overrun by the ARM competition in the not-too-distant future. Its chances in the mobile space are slim as well.

The Cortex-A57 is already available in actual designs, and that is just run-of-the-mill, fully synthesizable ARM IP. In other words, the ARM competition is not even trying.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
That is awful. The CPU clocks at 1/5 of its maximum and the GPU at 1/3 of its maximum. It is even worse when you consider this thing has a fan.

In addition, the SoC gets destroyed by the Tegra K1 in 3DMark. And 6 hours of battery life while browsing is on the lower end of the spectrum.

If Intel continues this way, it will be overrun by the ARM competition in the not-too-distant future. Its chances in the mobile space are slim as well.

The Cortex-A57 is already available in actual designs, and that is just run-of-the-mill, fully synthesizable ARM IP. In other words, the ARM competition is not even trying.

You know what the strange part is? If ARM chips like the S800 were given a load comparable to FurMark + Prime, you would see similar behaviour.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
It still feels strange that my old Samsung 900X3C ultrabook with Ivy Bridge is a bit faster, or about the same, especially under heavy load. I know the priority is more battery life, but still... my old machine weighs 1175 g, as I recall. I don't really feel an urge to upgrade, to say the least. It's meh, and I don't need more battery life than I already get. Battery life is surely more important than performance for most people, but man... it hurts. It's not like these are outright fast machines, imo. It's okay-to-good performance for office work, but not more.
 

III-V

Senior member
Oct 12, 2014
678
1
41
That is awful. The CPU clocks at 1/5 of its maximum and the GPU at 1/3 of its maximum. It is even worse when you consider this thing has a fan.

In addition, the SoC gets destroyed by the Tegra K1 in 3DMark. And 6 hours of battery life while browsing is on the lower end of the spectrum.

If Intel continues this way, it will be overrun by the ARM competition in the not-too-distant future. Its chances in the mobile space are slim as well.

The Cortex-A57 is already available in actual designs, and that is just run-of-the-mill, fully synthesizable ARM IP. In other words, the ARM competition is not even trying.
It's throttling because of artificially low power limitations set by Lenovo, not because of heat. The fan is largely irrelevant.
 
Aug 11, 2008
10,451
642
126
The point was even made in the article that normal operation was quick enough; for its intended use, throttling was not a problem. I have said it before, but I still think people are expecting way too much performance from such a low-wattage chip. The problem I see is that despite the low TDP, the battery life is not exceptional, and of course the price is very high. Personally it does not appeal to me, but for a business traveller doing primarily e-mail, office use, internet, and light productivity, and who wants a sleek, "impressive" package with the company paying for it, it could have a market.
 

dahorns

Senior member
Sep 13, 2013
550
83
91
The point was even made in the article that normal operation was quick enough; for its intended use, throttling was not a problem. I have said it before, but I still think people are expecting way too much performance from such a low-wattage chip. The problem I see is that despite the low TDP, the battery life is not exceptional, and of course the price is very high. Personally it does not appeal to me, but for a business traveller doing primarily e-mail, office use, internet, and light productivity, and who wants a sleek, "impressive" package with the company paying for it, it could have a market.

Yeah, the idle power draw is way too high on these machines. That is almost certainly affecting battery life more than anything else. Compare it to the MacBook Air (running Haswell-U), which has a fraction of the idle draw. Is this a limitation of Windows or lazy design decisions by OEMs?
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
It's throttling because of artificially low power limitations set by Lenovo, not because of heat. The fan is largely irrelevant.

This argument does not fly very far. The device was measured at 42 °C at the hot spot. How hot, in your opinion, should Lenovo have allowed the device to get?
Besides, no matter how bad the fan is, it is supposed to carry heat away from the hot spot, so it is safe to assume the device would have been even hotter than 42 °C without it. By how much is, obviously, unknown.

Yeah, the idle power draw is way too high on these machines. That is almost certainly affecting battery life more than anything else. Compare it to the MacBook Air (running Haswell-U), which has a fraction of the idle draw. Is this a limitation of Windows or lazy design decisions by OEMs?
For the idle use case this is to be expected at 14nm, and it will get worse at 10nm. There is not much you can do about it, aside from using smaller cores. That is the basic idea behind the big.LITTLE concept featured in many ARM designs (see the new Exynos using A57/A53).
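
As a rough illustration of that idea (all figures invented, not real A57/A53 numbers): a small core finishes light work more slowly, but with much less silicon powered on and leaking, the energy per task can still come out lower.

```python
# Toy comparison of a "big" and a "little" core on the same light task.
# Invented numbers for illustration only; not real A57/A53 measurements.

CORES = {
    # name: (performance in work-units/second, active watts, leakage watts)
    "big core":    (4.0, 1.50, 0.30),
    "little core": (1.5, 0.40, 0.08),
}

def energy_for_task(work_units, perf, active_w, leak_w):
    """Energy in joules to finish a task, counting leakage while the core is powered."""
    seconds = work_units / perf
    return (active_w + leak_w) * seconds

for name, (perf, active_w, leak_w) in CORES.items():
    joules = energy_for_task(2.0, perf, active_w, leak_w)
    print(f"{name}: {joules:.2f} J for the same light task")
```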
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
14nm transistors are already the best transistors in the world in every respect. So why do you say it will only get worse at 10nm, when they are going to have III-V compound semiconductor fins? It's really annoying when people blame the silicon for a device's battery life, while there are measurements showing that the screen uses far more energy. Also, people like to point out Apple's supremacy in the ARM SoC market, yet Apple doesn't use big.LITTLE.
 

III-V

Senior member
Oct 12, 2014
678
1
41
This argument does not fly very far. The device was measured at 42 °C at the hot spot. How hot, in your opinion, should Lenovo have allowed the device to get?
Lenovo shouldn't have let this product see the light of day, to be honest. It regresses compared to last year's model in other areas as well.
 
Last edited:

Thala

Golden Member
Nov 12, 2014
1,355
653
136
14nm transistors are already the best transistors in the world in every respect. So why do you say it will only get worse at 10nm, when they are going to have III-V compound semiconductor fins?

Oh dear, please do not comment on topics you are apparently not an expert in.
Let me first state that when idle, leakage is the single biggest contributor to power. In other use cases, active power (still) dominates. The ratio between these two, however, shifts in favor of leakage with each smaller process node. What you should also know is that leakage does not simply double when moving down half a node. Leakage is one of the main effects working against Moore's law.

The main reason for going to 3D transistors/FinFETs is to gain more control over the geometry of the gate channel and the electric field controlling it, thus reducing leakage current significantly.
To give you some idea: 22nm FinFET has much lower leakage than 28nm planar. I estimate that 14nm FinFET is about the same as 28nm planar for logic, but already worse for SRAM. 14nm FinFET is in any case worse than 22nm FinFET, because you cannot defeat physics.

At this point you need to think about clever designs, such as the big.LITTLE concept. Keep in mind Intel is already past the one-time FinFET/tri-gate gain with respect to leakage.

Care to enlighten me how this is supposed to improve when going down to 10nm, even if you consider compound materials? I mean, at some point you also need to reduce the thickness of the gate dielectric, where high-k materials are already used to extend Moore's law.
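
To make the static-vs-dynamic split concrete, here is a back-of-the-envelope sketch (illustrative numbers only, not real process data): dynamic power scales with activity and frequency, while leakage is burned whenever the voltage rail is up, which is why it dominates at idle.

```python
# Toy power model: total = dynamic (switching) + static (leakage).
# Dynamic ~ alpha * C * V^2 * f, static ~ V * I_leak. All numbers are invented.

def power_split(alpha, C, V, f, I_leak):
    """Return (dynamic_w, static_w) for given activity factor, capacitance, voltage, clock, leakage current."""
    dynamic = alpha * C * V**2 * f   # only paid while transistors are switching
    static = V * I_leak              # paid whenever the voltage rail is up
    return dynamic, static

for label, args in [("active", dict(alpha=0.30, C=3e-9, V=0.9, f=2.6e9, I_leak=0.5)),
                    ("idle",   dict(alpha=0.01, C=3e-9, V=0.7, f=0.4e9, I_leak=0.4))]:
    dyn, stat = power_split(**args)
    total = dyn + stat
    print(f"{label}: dynamic {dyn:.2f} W, static {stat:.2f} W ({stat/total:.0%} of total)")
```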
 

III-V

Senior member
Oct 12, 2014
678
1
41
I mean, at some point you also need to reduce the thickness of the gate dielectric, where high-k materials are already used to extend Moore's law.
Has an ultra high-k candidate been found yet?

Also, I have to say you are being a bit too doom and gloom about idle power. Power gating exists for a reason.
 

oobydoobydoo

Senior member
Nov 14, 2014
261
0
0
Oh dear, please do not comment on topics you are apparently not an expert in.
Let me first state that when idle, leakage is the single biggest contributor to power. In other use cases, active power (still) dominates. The ratio between these two, however, shifts in favor of leakage with each smaller process node. What you should also know is that leakage does not simply double when moving down half a node. Leakage is one of the main effects working against Moore's law.

The main reason for going to 3D transistors/FinFETs is to gain more control over the geometry of the gate channel and the electric field controlling it, thus reducing leakage current significantly.
To give you some idea: 22nm FinFET has much lower leakage than 28nm planar. I estimate that 14nm FinFET is about the same as 28nm planar for logic, but already worse for SRAM. 14nm FinFET is in any case worse than 22nm FinFET, because you cannot defeat physics.

At this point you need to think about clever designs, such as the big.LITTLE concept. Keep in mind Intel is already past the one-time FinFET/tri-gate gain with respect to leakage.

Care to enlighten me how this is supposed to improve when going down to 10nm, even if you consider compound materials? I mean, at some point you also need to reduce the thickness of the gate dielectric, where high-k materials are already used to extend Moore's law.

I am not an expert at all, but this is very interesting to me, and I was wondering if you could answer a question: which process would you say is best for leakage? You said that 22nm FinFET was better than 14nm FinFET. Where do TSMC's 20nm planar and Samsung's 20nm Exynos 5433 fit in there? Better or worse?

From what I can gather, big-core designs will run into problems below 14nm? It seems Apple has taken the big-core approach with the A8X while Samsung is going big.LITTLE; I wonder if the smaller cores can give Samsung an advantage once leakage becomes an issue. Would you say Samsung has the right approach with big.LITTLE versus big cores?
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
I was under the impression that while leakage gets higher relative to active power on smaller nodes, in absolute terms it gets lower.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Oh dear, please do not comment on topics you are apparently not an expert in.
Let me first state that when idle, leakage is the single biggest contributor to power. In other use cases, active power (still) dominates. The ratio between these two, however, shifts in favor of leakage with each smaller process node. What you should also know is that leakage does not simply double when moving down half a node. Leakage is one of the main effects working against Moore's law.

The main reason for going to 3D transistors/FinFETs is to gain more control over the geometry of the gate channel and the electric field controlling it, thus reducing leakage current significantly.
To give you some idea: 22nm FinFET has much lower leakage than 28nm planar. I estimate that 14nm FinFET is about the same as 28nm planar for logic, but already worse for SRAM. 14nm FinFET is in any case worse than 22nm FinFET, because you cannot defeat physics.

At this point you need to think about clever designs, such as the big.LITTLE concept. Keep in mind Intel is already past the one-time FinFET/tri-gate gain with respect to leakage.

Care to enlighten me how this is supposed to improve when going down to 10nm, even if you consider compound materials? I mean, at some point you also need to reduce the thickness of the gate dielectric, where high-k materials are already used to extend Moore's law.

First things first, I'd appreciate it if you refrained from the use of ad hominem. 14nm isn't just a shrink of 22nm. Intel made considerable changes to the fins to improve them. This is what Intel says about the issue:
[Intel slide: 14nmLeakage.png]

[Intel slide: BDW-14nm.png]


It seems Intel has indeed defeated common leakage wisdom with 14nm and surpassed expectations. Intel is probably biased, though, so let's turn to some objective analysis, done by Idontcare (for 22nm).

[Chart: static power consumption, temperature versus power (StaticPowerConsumptionTempversusPower.png)]

Source: Deeper Analysis of Static Power Consumption (Leakage)

I also found this:

[Chart: eetimes_pics_fig_4.jpg]

Source: http://www.eetimes.com/author.asp?section_id=36&doc_id=1265998

Leakage has been reduced, but dynamic power consumption has been reduced more:
[Chart: dynamic vs. static power consumption (DynamicvsStaticPowerConsumption.png)]

[Chart: Intel technology roadmap (Intel_technology_roadmap.gif)]

Source: http://maltiel-consulting.com/Integrating_high-k_Metal_Gate_first_or_last_maltiel_semiconductor.html

[Chart: static versus dynamic power for the 2600K and 3770K (StaticversusDynamicfor2600kan3770k.png)]

[Chart: CPU power consumption (CPUPowerConsumption.png)]


Idontcare's conclusion:
What amazes me, and this is the message I hope people absorb in reading this, what amazes me is that Intel was able to shrink the physical geometry of the circuits themselves in going from 32nm to 22nm (xtor density goes up) and yet they managed to essentially keep the static leakage the same (roughly) at any given temperature and/or voltage as the much less dense (and less likely to leak) 32nm circuits.

What amazes me is that Intel seems to have greatly reduced leakage at 14nm while cutting die size roughly in half. I haven't said that 10nm will greatly reduce it again, since III-V is more about improving drive current, but in any case I don't envision it rising, and even if it did, the hugely reduced dynamic power that compound semiconductors offer should more than make up for any increase. At ~5nm, Intel will probably go with GAA.
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I am not an expert at all, but this is very interesting to me, and I was wondering if you could answer a question: which process would you say is best for leakage? You said that 22nm FinFET was better than 14nm FinFET. Where do TSMC's 20nm planar and Samsung's 20nm Exynos 5433 fit in there? Better or worse?

From what I can gather, big-core designs will run into problems below 14nm? It seems Apple has taken the big-core approach with the A8X while Samsung is going big.LITTLE; I wonder if the smaller cores can give Samsung an advantage once leakage becomes an issue. Would you say Samsung has the right approach with big.LITTLE versus big cores?

I'm not sure about 20nm, but it would surprise me a bit if leakage has risen (considerably), since those engineers get paid to improve things; it's certainly not impossible, though, as 20nm really doesn't reduce power that much. But as you can see with TSMC's FinFET pull-in, FinFETs are mandatory at these densities, so 16nm will only improve on the 20nm process (see also my post above).

I don't think leakage is enough of an issue that it impacts the choice of a regular design vs. big.LITTLE. I'm not convinced that big.LITTLE gives much of a benefit, if any. If you don't have a big R&D budget, sure, it might be cheaper to develop a mediocre big core and a mediocre little core, but SoCs like Apple's A series and Intel's Atom prove to me that a good core does not need a companion core.
 
Last edited: