I don't know what to expect from "significant improvement in both application as well as graphics performance" though. Lately that has meant anything from 5-50%...

Intel CEO Brian Krzanich today confirmed that new Intel Core M chips based on the Skylake processor microarchitecture will be arriving in the second half of the year.
[...]
The Skylake based Core M chips are expected to bring a significant improvement in both application as well as graphics performance. Currently the Core M chips are seen to be weaker performers than the earlier Haswell ULV chips that powered ultrabooks and tablets.
[...]
Another major feature of the upcoming Skylake Core M processors will be support for the second generation of Intel's RealSense 3D camera technology. The technology allows people to interact with their notebook or tablet using gestures and facial recognition.
Take performance claims with a grain of salt. The article only says "are expected", so this is still speculation.
We already knew SKL-Y would be coming. We've known for a long time now that it's 4W.
I think SKL-Y could earn Core M a better reputation, since right now people are comparing it against HSW or even BDW-U. It will be interesting to see how SKL-Y performance compares to BDW-Y, since it will apparently lack the 2nd gen FIVR and will have a lower, 4W TDP. CNL-Y will be an even more mature Core M, and by then it should really become a good option (certainly if it moves to lower price points, as I expect it to).
Broadwell is really short-lived. Given that BDW wasn't really groundbreaking, this is understandable.
Sorry, I really want x86 to compete well with ARM in mobile, but wasn't Broadwell supposed to be the magic bullet? And now we are speculating two more generations out and saying, well, maybe then?
Intel needed to take advantage of their process lead right now and strike a serious blow against ARM. Instead we got a delayed, underwhelming product in both the Core and Atom lines. This just allows ARM to get even more firmly entrenched and improve their performance as well.
BDW is actually quite a major overhaul of HSW, with lots of iterative improvements across the board.
Ayup, the only 'groundbreaking' point with respect to Broadwell would be the 14nm process... which is also kinda the reason why none of the rest of it is really groundbreaking.
ARM can't compete performance-wise with big cores at all.
And in that segment the battle is Atom's; it was never going to be Broadwell's.
ARM already lost close to 25% of the tablet segment.
I'm sure an ARM licensee could build a very fast "big core". Just as x86 isn't inherently unsuitable for low power, ARM isn't inherently unsuitable for high performance!
Saying "iterative improvements" and "major overhaul" are rather contradictory statements.BDW is actually quite a major overhaul of HSW, with lots of iterative improvements across the board.
http://files.shareholder.com/downlo...6-4AB7-B35F-BD98AB44B43F/Intel_14nm_Aug11.pdf
So what have we gotten so far? An MIA product, and based on Broadwell results I am expecting far less improvement than was touted. Where is all the power saving of 14nm? Where is the great graphical improvement from Gen 8?
I won't go crazy like a certain poster here and try to prove that 14nm is a regression. But Intel needed it to be a big step forward in GPU and power savings, and instead it has so far seemed to be just another incremental improvement.
We use the stress test to examine the inner temperature development in an extreme scenario. The CPU and GPU are loaded via Prime95 and FurMark, and they are observed for one hour. It is remarkable that the CPU's clock rate never dropped to below 2500 MHz, and thus it remained even over the base speed. The CPU immediately clocked up to its maximum of 2.7 GHz directly after the stress test and achieved the same 3DMark 11 scores as in a cold start.
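If anyone wants to reproduce that kind of sustained-clock observation on their own machine, here's a minimal sketch that just logs reported core frequencies while Prime95/FurMark (or any other load) runs alongside it. It assumes a Linux box exposing cpufreq under /sys; the paths and the one-second sampling interval are my choices, not anything from the review.

```python
# Minimal sketch: sample per-core CPU frequency while an external stress test
# (e.g. Prime95 + FurMark, as in the review) is already running.
# Assumes Linux with cpufreq exposed under /sys; the paths are an assumption,
# not something taken from the notebookcheck article.
import glob
import time

FREQ_PATHS = sorted(glob.glob(
    "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))

def sample_mhz():
    """Read the current reported frequency of every core, in MHz."""
    freqs = []
    for path in FREQ_PATHS:
        with open(path) as f:
            freqs.append(int(f.read().strip()) / 1000)  # kHz -> MHz
    return freqs

if __name__ == "__main__":
    if not FREQ_PATHS:
        raise SystemExit("no cpufreq sysfs entries found on this system")
    # Log once per second for an hour, mirroring the one-hour run described above.
    for _ in range(3600):
        freqs = sample_mhz()
        print(f"min {min(freqs):.0f} MHz  max {max(freqs):.0f} MHz")
        time.sleep(1)
```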
I was talking primarily about phones and tablets. And yes, that is where Intel needed 14nm Cherry Trail to be on time and knock it out of the park on performance per watt.
And yes, ARM lost tablet share, but let's see if Intel can maintain that foothold and how rapidly they can decrease contra revenue. Again, I can't see all the delays helping here either.
As I have said before, all the problems with 14nm could not possibly have come at a worse time for Intel.
Wow, Broadwell-U looks pretty amazing in that review.
14nm is pretty good.
http://www.notebookcheck.net/Acer-Aspire-V3-371-Notebook-Review.135831.0.html
A 2.5 GHz CPU + 900 MHz iGPU at ~18W package power is amazing compared to Haswell-U. Broadwell-U can hold max turbo SOLID under the most strenuous loads (package power is a little higher than 15W, but I'm not sure how accurate HWiNFO is here with graphics power at 0.8W).
It's even better when you see that total notebook power consumption is barely over 30W.
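If you want to sanity-check a package-power figure like that ~18W yourself instead of trusting HWiNFO, the RAPL energy counters are one way to do it. A rough sketch for Linux below; the powercap sysfs path is an assumption about the machine, not something from the post or the review.

```python
# Rough sketch: estimate CPU package power from the RAPL energy counter.
# Assumes Linux with the intel_rapl powercap driver loaded; the exact sysfs
# path can differ per machine, so treat this as illustrative only.
import time

ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

def read_energy_uj():
    with open(ENERGY_FILE) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    interval = 1.0  # seconds between samples
    prev = read_energy_uj()
    while True:
        time.sleep(interval)
        cur = read_energy_uj()
        # Counter is in microjoules and wraps around eventually; wrap handling
        # is omitted here to keep the sketch short.
        watts = (cur - prev) / 1e6 / interval
        print(f"package power: {watts:.1f} W")
        prev = cur
```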
Saying "iterative improvements" and "major overhaul" are rather contradictory statements.
No, it's like evolution. Lots of microevolution eventually results in macroevolution.
*Improved density by 2x to 2.2x (rough scaling check sketched after this list)
*14nm power decrease and performance increase for much improved performance per watt
*Slightly improved BDW (micro)architecture
*Gen 8 architecture with more cache, more shaders and higher sampler throughput
*HSA, DCC, media improvements, H.265
*3DL, 2nd gen FIVR
*Reduced PCB footprint
*Improved PCH
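On the density bullet above: a quick back-of-envelope check of how the claimed 2x-2.2x compares to naive node-name scaling. The "ideal" number is just my own simplistic assumption that every linear dimension shrinks with the node name, which hasn't been literally true for a long time.

```python
# Back-of-envelope check on the "2x to 2.2x density" figure.
# Naive assumption (mine, not Intel's): if every linear dimension shrank in
# proportion to the node name, area per transistor would scale with the
# square of the ratio.
old_node_nm = 22
new_node_nm = 14

ideal_density_gain = (old_node_nm / new_node_nm) ** 2
print(f"ideal scaling from node names alone: {ideal_density_gain:.2f}x")  # ~2.47x

# The claimed 2x-2.2x sits a bit below that idealized figure.
claimed = (2.0, 2.2)
print(f"claimed: {claimed[0]}x to {claimed[1]}x")
```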
Broadwell’s video decode capabilities will also be increasing compared to Haswell. On top of Intel’s existing codec support, Broadwell will be implementing a hybrid H.265 decoder, allowing Broadwell to decode the next-generation video codec in hardware, but not with the same degree of power efficiency as H.264 today. In this hybrid setup Intel will be utilizing both portions of their fixed function video decoder and executing decoding steps on their shaders in order to offer complete H.265 decoding. The use of the shaders for part of the decoding process is less power efficient than doing everything in fixed function hardware but it’s better than the even less optimal CPU.
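For anyone curious whether their own machine ends up decoding H.265 with hardware assistance (fully fixed-function, hybrid, or not at all), one crude way to probe it is to time a decode pass with and without hardware acceleration. A sketch using ffmpeg below; the clip name and the VA-API render node path are placeholders, and which path the driver actually takes (hybrid or not) is entirely up to it.

```python
# Sketch: crude comparison of software vs. hardware-assisted H.265 decode
# wall time using ffmpeg. "sample_hevc.mkv" and the render node path are
# placeholders; this only shows whether hw-assisted decode works and is
# faster, not how the fixed-function/shader split is made internally.
import subprocess
import time

SAMPLE = "sample_hevc.mkv"  # any HEVC clip you have lying around

def time_decode(extra_args):
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-v", "error", *extra_args, "-i", SAMPLE,
         "-f", "null", "-"],  # decode only, discard the output
        check=True)
    return time.time() - start

if __name__ == "__main__":
    sw = time_decode([])  # plain CPU decode
    hw = time_decode(["-hwaccel", "vaapi",
                      "-hwaccel_device", "/dev/dri/renderD128"])
    print(f"software decode: {sw:.1f}s, hw-assisted decode: {hw:.1f}s")
```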
I think the phenomenon you are describing is really just architectural/product planning failures on Intel's part re: Cherry Trail for tablets.
Intel wildly underestimated what the ARMy would bring to bear in terms of CPU/GPU performance, IMO.
To be fair, if Cherry Trail had arrived on schedule it would probably have been pretty competitive. But the delays mean that their competitors are now a year ahead of where they would have been in the original plan.
The more reviews come in, the stranger this Broadwell vs. Haswell comparison becomes: how is it that, despite having lower idle power consumption and seemingly being able to sustain higher turbo clocks at the same TDP, Broadwell- and Haswell-equipped units have the same stamina while browsing the Internet?
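One way to make sense of that: if the display and the rest of the platform dominate power draw during light browsing, a lower SoC idle power barely moves total runtime. The numbers below are purely illustrative assumptions, not figures from either review.

```python
# Illustrative arithmetic (all numbers are assumptions, not review data):
# during light web browsing the display + platform can dominate total draw,
# so a somewhat lower SoC average power changes runtime less than expected.
battery_wh = 50.0          # hypothetical ultrabook battery capacity
display_platform_w = 4.0   # screen, WiFi, storage, VRMs, ...
soc_haswell_w = 1.5        # hypothetical SoC average during browsing
soc_broadwell_w = 1.0      # hypothetical, somewhat lower

for name, soc_w in [("Haswell-like", soc_haswell_w),
                    ("Broadwell-like", soc_broadwell_w)]:
    total = display_platform_w + soc_w
    print(f"{name}: {battery_wh / total:.1f} h browsing")  # ~9.1 h vs ~10.0 h
```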
That might be a factor.
But when I think two years back: I bought a Nook HD+ tablet for one of the kids, with a 1.5 GHz TI OMAP 4470, a dual-core ARM A9. When it was released nearly 3 years ago, that CPU was the fastest on the ARM market.
Now we have Samsung's 14nm quad A57.
The improvement in 3 years is insane. Sweepr shows that brilliantly.
We tend to judge Intel against that, and that makes its improvements seem small.
But the situation is also quite different. Instead of competing against AMD, Intel now faces Samsung and Apple.
The difference here is even far, far greater than what the performance differences indicate.
Still, I think it's relevant to judge e.g. BDW performance on its own merits. And yeah, it's not something that makes me personally buy a new ultrabook, but for new buyers it's a small improvement that comes for free, so that's just nice. We have to remember the level was already very high before.
