Intel Skylake / Kaby Lake

Aug 11, 2008
10,451
642
126
Guess you want people to lose their jobs (bad corporate financial performance -> workforce reductions) and for people to see their 401k retirement accounts crash... all over a $1500 halo CPU that nobody is forcing you to buy and dumb overclocking features, because you're apparently too cheap to spend the extra $20 to buy a Z170 board.

What is wrong with you?

I of course disagree with his wish for a crash, and I think perhaps he was only speaking metaphorically anyway. But I do agree with him about the arrogance of some of the tech giants. And IMO, Intel is far from the worst. I mean, you are right, who cares if they sell a $1500 CPU? If you think it is overpriced, simply don't buy it. There are plenty of cheaper alternatives from Intel itself, and also from AMD, and even Android.

But I would put MS (I used to actually defend them a lot) and Google very high on the list. Both of these are intimately intertwined in our everyday lives.
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
Efficiency is largely above 90%; it's not by chance that multiphase supplies are used on motherboards. Power losses are actually low because the current is distributed between that many MOSFETs.

At 90W into the CPU and 90% efficiency, this leaves 10W in the VRMs and attached components, including the motherboard copper traces. That's considerable power to dissipate, so I would say that anything below 90% efficiency at the CPU supply level is bad design.

FTR I measured a few laptop PSUs; the best one is a 70W unit which has 94-95% efficiency at 65W output power, but there was nothing below 90%, and those are not multiphase designs.

Edit: HFR's power and overclocking tests are done on the 12-phase Asus Z87 Pro motherboard....

It appears you don't actually know much of what you're talking about, though you sure do seem certain of it. I'm pretty aware of current sharing in VRMs; I've designed, built and tested several large (several hundred watt) multiphase synchronous buck converters. Efficiency is a tough thing to measure with live loads, and while an efficiency of 90% from the 12V cable in to low voltage on package through the socket is certainly possible with the right conditions, saying it's average is silly. The Z87 Pro still uses a driver and discrete dual MOSFETs BTW, which as a rule will be a little less efficient than an integrated package.

I'm not sure why you keep bringing up AC/DC power supplies BTW. They're completely different topologies and are not directly comparable. Tossing ATX PSU efficiencies into a discussion on low voltage DC/DC conversion is as silly as proclaiming, as absolute proof, that Vendor A's transistors are 20% less efficient than Vendor B's because their CPU runs at 20% higher voltage, while ignoring every other aspect of the CPU design or the process itself.

BTW - If you're measuring 95% efficiency on your laptop power brick, I suggest you revisit your methodology and test equipment.

Edit: While this is old, check out figure 5 in this pdf
http://www.irf.com/technical-info/whitepaper/pswus03vrmdesign.pdf
Notice the efficiency into the socket is considerably worse than it is at the output of the VRM.
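
As a quick sanity check on the numbers being thrown around, here is a minimal sketch of the loss arithmetic; the 90 W load and 90% efficiency figures are the ones quoted above, and the helper name is illustrative only:

```c
#include <stdio.h>

/* Power dissipated in the VRM stage for a given output power and efficiency.
 * P_in = P_out / eff, so the loss is P_out * (1/eff - 1). */
static double vrm_loss_watts(double p_out, double efficiency)
{
    return p_out * (1.0 / efficiency - 1.0);
}

int main(void)
{
    /* The figures quoted above: 90 W delivered to the CPU at 90% efficiency. */
    printf("Loss at 90%% efficiency: %.1f W\n", vrm_loss_watts(90.0, 0.90)); /* 10.0 W  */
    /* For comparison, the same load at 85% efficiency. */
    printf("Loss at 85%% efficiency: %.1f W\n", vrm_loss_watts(90.0, 0.85)); /* ~15.9 W */
    return 0;
}
```

As figure 5 in the linked whitepaper shows, measuring into the socket rather than at the VRM output pushes the loss figure higher still.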
 
Mar 10, 2006
11,715
2,012
126
It appears you don't actually know much of what you're talking about, though you sure do seem certain of it. I'm pretty aware of current sharing in VRMs; I've designed, built and tested several large (several hundred watt) multiphase synchronous buck converters. Efficiency is a tough thing to measure with live loads, and while an efficiency of 90% from the 12V cable in to low voltage on package through the socket is certainly possible, saying it's average is silly. The Z87 Pro still uses a driver and discrete dual MOSFETs BTW, which as a rule will be a little less efficient than an integrated package.

I'm not sure why you keep bringing up AC/DC power supplies BTW. They're completely different topologies and are not directly comparable. Tossing ATX PSU efficiencies into a discussion on low voltage DC/DC conversion is as silly as proclaiming, as absolute proof, that Vendor A's transistors are 20% less efficient than Vendor B's because their CPU runs at 20% higher voltage, while ignoring every other aspect of the CPU design or the process itself.

BTW - If you're measuring 95% efficiency on your laptop power brick, I suggest you revisit your methodology and test equipment.

Good to see that certain posters' snow-jobs don't work on people who know what they're talking about.
 
Mar 10, 2006
11,715
2,012
126
A market crash would only worsen the competitive state of things (short term)... a lot of the smaller guys will go splat, while the larger ones will have economies of scale, assets, and cash reserves to weather any storm.

The smaller ones will have to cut back R&D and lay people off in droves to survive while the larger companies will be able to continue to invest and once the "recession" recedes (ha...), the big guys who could invest will be stronger than ever.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Guess you want people to lose their jobs (bad corporate financial performance -> workforce reductions)

Not to be callous, but the possibility of people losing their jobs is the worst reason ever for the tech industry to become complacent. The tech industry is one of the best examples of meritocracy in the worldwide system of capitalism for a reason: it constantly creates winners out of those who innovate and losers out of those who fall behind.

People don't get into technology careers (or careers at technology companies) to mitigate personal risk; if that were their goal, they would have some government job. People get into technology careers for the potential upside of wages that greatly outpace the national average, and with that comes a degree of risk if you hitch yourself to the wrong wagon.

The very talented people in technology can always get a job elsewhere, maybe making more exciting products than their old company produced. When that happens we all benefit, even if some less talented eggs get broken making that particular omelet.

Just the rise of all the new ARM vendors over the last few years shows that any threat of a technology oligopoly is malarkey. If anything, we have the most competitive technology market ever due to the declining influence of Wintel and the desktop.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,797
260
126
The smaller ones will have to cut back R&D and lay people off in droves to survive while the larger companies will be able to continue to invest and once the "recession" recedes (ha...), the big guys who could invest will be stronger than ever.

Why would bigger companies cut back less on R&D in a recession (in relation to number of employees, market cap, or whatever you use to define "company size")?

And you do know "big company" != "big profit/revenue in relation to market cap", right?
 
Mar 10, 2006
11,715
2,012
126
Why would bigger companies cut back less on R&D in a recession (in relation to number of employees, market cap, or whatever you use to define "company size")?

And you do know "big company" != "big profit/revenue in relation to market cap", right?

In a recession, rich and very profitable companies simply become less profitable, but can generally sustain their investments without any risk of going under.

Companies on the brink that see large declines in revenue may have to cut back R&D/marketing/capex simply to survive.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,797
260
126
In a recession, rich and very profitable companies simply become less profitable, but can generally sustain their investments without any risk of going under.

Companies on the brink that see large declines in revenue may have to cut back R&D/marketing/capex simply to survive.

I agree with parts of that. But company size does not determine profitability, nor cash reserves relative to company size.

A big company that is not profitable is just as likely to cut back on R&D as a small non-profitable company. So company size has nothing to do with it; profitability does, though (as do cash reserves relative to company size).
 

386DX

Member
Feb 11, 2010
197
0
0
I'm starting to hope for a big market crash. The major players in the tech industry are becoming far too arrogant: Intel's attempts to crack down on BCLK OC plus upcoming price hikes to $1500 on HEDT, Microsoft's incessant attempts to jam Win10 (spyware) down our throats plus the retroactive OneDrive quotas, Google's pioneering of new ways to invade our privacy... I want to see the whole house of cards come tumbling down.

Did you even read the article? Intel isn't cracking down on BCLK OC; they are just asking the motherboard manufacturers not to enable that feature on non-Z170 boards, because BCLK OC on those boards disables certain chip features, the big important one being C-states. This is going to lead to idiots running overclocked systems on non-Z170 boards that idle at 300W+, since the CPU can't idle with C-states disabled. All because they didn't spend the extra $20 to get a Z170 board. I don't think it's an unreasonable request from Intel, despite your tin foil hat theories.
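
For anyone who wants to see what "the CPU can't idle with C-states disabled" looks like in practice, here is a minimal sketch that lists the C-states the Linux kernel exposes for CPU 0 and how often each has been entered; it assumes the standard cpuidle sysfs interface and is not an Intel tool:

```c
#include <stdio.h>

/* Minimal sketch: print the C-states the kernel exposes for CPU 0 and how
 * often each has been entered. On a system where BCLK overclocking has
 * disabled the deeper C-states, only the shallow states (POLL/C1) would
 * ever show usage. Assumes the Linux cpuidle sysfs interface. */
int main(void)
{
    for (int state = 0; state < 10; state++) {
        char path[128], name[64];
        unsigned long usage = 0;
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", state);
        f = fopen(path, "r");
        if (!f)
            break;                      /* no more states exposed */
        if (fscanf(f, "%63s", name) != 1)
            name[0] = '\0';
        fclose(f);

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/usage", state);
        f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%lu", &usage) != 1)
                usage = 0;
            fclose(f);
        }
        printf("state%d: %-8s entered %lu times\n", state, name, usage);
    }
    return 0;
}
```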
 
Mar 10, 2006
11,715
2,012
126
I agree with parts of that. But company size does not determine profitability, nor cash reserves relative to company size.

A big company that is not profitable is just as likely to cut back on R&D as a small non-profitable company. So company size has nothing to do with it; profitability does, though (as do cash reserves relative to company size).

When I talk about "big" companies I mean companies with large revenue & correspondingly large profit. Usually such companies command large market caps, since market cap is roughly k × net income, where k is a multiple that depends on market participants' collective sentiment around the company's future prospects, i.e. the earnings multiple.

Low growth/shrinking companies (like Intel) get small multiples while fast growing companies get big ones.

Intel with $55.4B in revenue and $11B+ in net income is a "large" company. AMD with <$4B in revenue and bleeding money is a "small" company in my book.

Anyway, if you took a company like Intel and cut its revenue in half tomorrow & gross profit margins plunged to ~50% (doomsday case), it would be ever-so-slightly profitable. This means that it could afford to sustain its current level of investment without bleeding cash.

If you cut AMD's revenue by 50%, the company would go bankrupt within a year w/o serious operating expense cutbacks.
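
A minimal sketch of the earnings-multiple and "doomsday" arithmetic in the post above; the 13x multiple and the $13B operating-expense figure are illustrative placeholders, not actual Intel financials:

```c
#include <stdio.h>

/* Implied market cap under the simple earnings-multiple model described
 * above: market cap = multiple * net income. Inputs in $B. */
static double implied_market_cap(double net_income_b, double multiple)
{
    return net_income_b * multiple;
}

/* "Doomsday" operating income: revenue halved, gross margin compressed,
 * operating expenses held constant. All inputs in $B. */
static double doomsday_operating_income(double revenue_b, double gross_margin,
                                        double opex_b)
{
    return (revenue_b / 2.0) * gross_margin - opex_b;
}

int main(void)
{
    /* $11B net income at an assumed low-growth multiple of 13x. */
    printf("Implied market cap: ~$%.0fB\n", implied_market_cap(11.0, 13.0));

    /* Illustrative only: $55.4B revenue halved, 50%% gross margin,
     * and a placeholder $13B of operating expenses. */
    printf("Doomsday operating income: ~$%.1fB\n",
           doomsday_operating_income(55.4, 0.50, 13.0));
    return 0;
}
```

With those placeholder numbers the halved-revenue case lands just barely in the black, which is the "ever-so-slightly profitable" scenario described above.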
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,797
260
126
When I talk about "big" companies I talk about companies with large revenue & correspondingly large profit.
Revenue != profit. Company A can have much higher revenue than Company B, yet Company B can have higher profit.
Usually such companies command large market caps, since market cap is roughly k × net income, where k is a multiple that depends on market participants' collective sentiment around the company's future prospects, i.e. the earnings multiple.

Low growth/shrinking companies (like Intel) get small multiples while fast growing companies get big ones.

Intel with $55.4B in revenue and $11B+ in net income is a "large" company. AMD with <$4B in revenue and bleeding money is a "small" company in my book.

Anyway, if you took a company like Intel and cut its revenue in half tomorrow & gross profit margins plunged to ~50% (doomsday case), it would be ever-so-slightly profitable. This means that it could afford to sustain its current level of investment without bleeding cash.

If you cut AMD's revenue by 50%, the company would go bankrupt within a year w/o serious operating expense cutbacks.

What's important is of course the R&D reduction relative to the size of the company.

A small profitable company with good cash reserves relative to size will reduce relative R&D less than a big company that is not profitable and has low cash reserves relative to its size.

I.e. company size does not matter. What matters is profitability and relative cash reserves.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
There's lots of criticism of Intel graphics, but let's give credit where credit's due:

- 2011

On average, the A8-3500M is 50% faster than HD 3000 at Low settings; move up to our Medium settings and Llano is 76% faster on average, with leads in every title ranging from 36% (StarCraft II is again the worst showing for AMD) to as much as 204% (Civilization V).

- 2012

We found that across the same selection of 15 titles, Ivy Bridge and Llano actually ended up “tied”—Intel led in some games, AMD in others, but on average the two IGPs offered similar performance.

...Overall, it's a 20% lead for Trinity vs. quad-core Ivy Bridge.

2013/2014 Richland and Kaveri comparisons were not at the same/similar TDP; AnandTech used Intel ULV/ULT parts vs 35W AMD APUs.

- 2015 (excluding Skylake-U GT3e)

NotebookCheck just tested the full blown Pro A12-8800B APU (fastest Carrizo, 512 SPs @ 800MHz - FX-8800P equivalent) with dual-channel RAM. Core i5-6200U is one of the most popular Skylake-U models, based on regular HD Graphics 520 (GT2, not fancy GT3e with eDRAM).

In the games where both chips were pitted against each other (19 games total), Core i5-6200U was faster in 10 games while A12-8800B was faster in 9 games at 1366x768 using medium quality settings. You were right mikk.

Also worth noting, HD 520 (Skylake-U) is 35-36% faster than HD 5500 (Broadwell-U) @ Tomb Raider, Bioshock Infinite and Middle Earth: Shadow of Mordor.

BTW, Samsung is testing Kabylake-U GT2 right now.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
So we have to give credit to Intel because their competitor's iGPUs were bandwidth-starved for a whole 4 years AND 1.5 nodes behind the whole time? Congrats to them!
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
So we have to give credit to Intel because their competitor's iGPUs were bandwidth-starved for a whole 4 years AND 1.5 nodes behind the whole time? Congrats to them!

Your post highlights that there are very few reasons to buy an AMD-based laptop right now. And nice observation: Intel closed the huge gap without adding eDRAM to their whole lineup.
 

nerp

Diamond Member
Dec 31, 2005
9,866
105
106
Talk about profit/loss/R&D. Interesting points raised here but no reference to shareholders, who ultimately have tremendous sway.
 

DrMrLordX

Lifer
Apr 27, 2000
21,644
10,865
136
So we have to give credit to Intel because their competitor's iGPUs were bandwidth-starved for a whole 4 years AND 1.5 nodes behind the whole time? Congrats to them!

We also have to give credit to Intel for Carrizo being cTDP-limited on just about every laptop except one Toshiba model that has inferior VRMs. That "35W" AMD APU was probably stuck at 15-22W for the entirety of the test.
 
Aug 11, 2008
10,451
642
126
Well, the performance "is what it is". Does it really matter *why* a particular chip is being held back? Seems like we have one excuse after another for AMD's poor performance: the chip is old, it is x process nodes behind Intel, it is TDP limited, OEMs are gimping it, and on and on and on. So what? It is the responsibility of a company to put out a competitive product, including updating it, using appropriate materials, and ensuring the devices it is placed into are properly designed. Obviously, I like to study and discuss CPUs, but ultimately I don't really care why product A is faster than product B; it is the final performance that counts.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
We also have to give credit to Intel for Carrizo being cTDP-limited on just about every laptop except one Toshiba model that has inferior VRMs. That "35W" AMD APU was probably stuck at 15-22W for the entirety of the test.

So you'd like to compare 35-42W Carrizo to 15W Intel ULT so that AMD looks better? Try 28W Skylake-U GT3e instead. It's not Intel's fault that in a world of thin'n'light laptops/convertibles OEMs don't see the value proposition of AMD's chips set to a higher cTDP. I would also like to see more Iris notebooks instead of Intel ULT + low-end dGPUs but that's what you will usually find.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
We could try to compare that with Intel's 35-45w HD 530 though.

Those are quad-cores so Carrizo would get massacred in CPU performance, despite faster graphics than non-Iris iGPUs at that TDP. I see it as an ULT competitor.
 

Rngwn

Member
Dec 17, 2015
143
24
36
Those are quad-cores so Carrizo would get massacred in CPU performance, despite faster graphics than non-Iris iGPUs at that TDP. I see it as an ULT competitor.

Well, there is a 35W dual-core Core i3-6100H. That could make a good matchup.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
Quick look at Intel SDE 7.39:

Emulation support for the additional Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions present on some future Intel® Xeon® processors scheduled to be introduced after Knights Landing.

Emulation support for the Intel® Secure Hash Algorithm (Intel® SHA) extensions present on the Intel Goldmont microarchitecture.

Emulation support for the Intel® Memory Protection Extensions (Intel® MPX) present on the Intel Skylake microarchitecture and Intel Goldmont microarchitecture.

Cannonlake 6066x supports the PCOMMIT and CLWB instructions (persistent memory support in x86)

Skylake Xeon 5065x and Cannonlake 6066x support PKU & OSPKE (the RDPKRU & WRPKRU instructions)

https://software.intel.com/en-us/articles/intel-software-development-emulator
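
For anyone curious which of these extensions their own CPU already reports (rather than emulating them under SDE), here is a minimal sketch using the documented CPUID leaf-7 feature bits for MPX, AVX-512F, CLWB, SHA and PKU/OSPKE; it's a GCC/Clang-specific example, not part of SDE:

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Structured extended feature flags: CPUID leaf 7, sub-leaf 0. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 7 not supported");
        return 1;
    }

    printf("MPX      : %s\n", (ebx & (1u << 14)) ? "yes" : "no");
    printf("AVX-512F : %s\n", (ebx & (1u << 16)) ? "yes" : "no");
    printf("CLWB     : %s\n", (ebx & (1u << 24)) ? "yes" : "no");
    printf("SHA      : %s\n", (ebx & (1u << 29)) ? "yes" : "no");
    printf("PKU      : %s\n", (ecx & (1u << 3))  ? "yes" : "no");
    printf("OSPKE    : %s\n", (ecx & (1u << 4))  ? "yes" : "no");
    return 0;
}
```

SDE is still needed to actually run code that uses instructions the host CPU lacks; this only checks what the hardware advertises.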
 
Aug 11, 2008
10,451
642
126
The way I read that, it seems to me the 16GB of memory is included in the $100 extra over the base price. In any case, way, way too rich for my blood, even for the base model. The upgrade for Iris Pro is reasonable. But again, at that price, one could easily argue that Iris Pro and the better screen should be standard.