
Intel Skylake / Kaby Lake

The eDRAM cost is very small. It was estimated to be $3.

You mean production cost? But that does not include the R&D costs etc. that Intel also has to get an ROI on.

If you only take the pure production cost of a 4790K die, I bet it's not that high either (compared to the sales price).
 
You mean production cost? But that does not include the R&D costs etc. that Intel also has to get an ROI on.

If you only take the pure production cost of a 4790K die, I bet it's not that high either (compared to the sales price).

Memory is trivial to produce. That's why you can buy it in large amounts for a small price.

Remember, even $3 works out to $24 per gigabyte, several times regular memory prices. I don't even think HBM costs that much.
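For reference, a quick back-of-the-envelope in Python; the 128MB Crystal Well-style capacity is my assumption, and the $3 figure is just the estimate quoted above:

```python
# Back-of-the-envelope eDRAM cost per gigabyte.
# Assumes a Crystal Well-style 128 MB eDRAM die at the $3 estimate quoted
# above; both figures are rough assumptions, not confirmed numbers.
EDRAM_COST_USD = 3.0
EDRAM_SIZE_MB = 128

cost_per_gb = EDRAM_COST_USD * (1024 / EDRAM_SIZE_MB)
print(f"eDRAM: ~${cost_per_gb:.0f} per GB")  # -> ~$24 per GB
```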
 
SiSoft is right where Skylake should be; that Cinebench score is worrisome instead.
I hope it's just wrong and there is "some" clock-for-clock improvement over Haswell, because better graphics (and power consumption?) aren't exactly my priority right now...

It also puts Skylake-E in perspective: if all the improvement is going into the iGP, then Skylake-E is kind of stillborn unless you're into an inverse sexy-time relationship with your utility bill.
 
Just like Xeon-D came out of nowhere and killed any hope ARM had in servers, I'm pretty sure Intel will react if AMD delivers next year (after their failures I don't buy the hype, though). A mainstream 6C+GT2 @ 3.5GHz would probably be small and easy to do on a mature 14nm process. Another immediate solution would be lowering the price of Broadwell-E hexa-cores to LGA115x Core i7 levels ($330), and octo-cores down to $600. They are playing it safe right now because they can.

Yup, and they've got Xeons that they can pull in and unlock for the higher price points.
 
I'm assuming at some point Intel will put the PCH on-die for all mainstream parts, possibly on Cannonlake, but it could be after that. It would take up a bit of space depending on features, obviously. But it still seems unlikely we'll see more than four cores any time soon.
 
I'm assuming at some point Intel will put the PCH on-die for all mainstream parts, possibly on Cannonlake, but it could be after that. It would take up a bit of space depending on features, obviously. But it still seems unlikely we'll see more than four cores any time soon.

PCH on-die would be awesome for Ultrabooks/2-in-1s/tablets. Not sure if it's all that valuable in higher-perf notebooks/desktops.
 
Unfortunately that's the boat I'm in with my overclocked i7-2600K. I was initially estimating an average improvement in the 30%-40% range at the same clock speed, but it doesn't look like that will happen if these benches are accurate. True, in certain tasks it will be much faster than the 2600K, but those tasks probably aren't important enough to me at this time to justify an upgrade. That likely means I'll be using my 2600K for another year. I love it, but I am itching for new toys and an upgrade. 🙂
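For what it's worth, here's a rough sketch of how the commonly cited per-generation IPC gains compound since Sandy Bridge; every percentage here is a ballpark assumption, not a measurement:

```python
# Rough compounding of commonly cited per-generation IPC gains since
# Sandy Bridge. Every percentage here is a ballpark assumption.
gains = [
    ("Ivy Bridge", 0.05),
    ("Haswell",    0.10),
    ("Broadwell",  0.05),
    ("Skylake",    0.05),  # assumed from these leaks
]

total = 1.0
for arch, gain in gains:
    total *= 1.0 + gain
    print(f"{arch:>10}: cumulative {total - 1.0:+.0%} vs Sandy Bridge")
# Compounds to roughly +27%, short of the 30%-40% hoped for above.
```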


ditto. 🙂
 
The eDRAM cache offers a performance improvement bigger than a new arch, or alternatively power savings equal to a new node, and we consider it expensive?

Yea, because only parts with eDRAM will be faster. Costs for a new architecture are spent once, and it's something they do anyway (R&D). With eDRAM it's a persistent cost on every part they sell. It's not like eDRAM speeds up many CPU applications anyway. The fact is, there is also packaging complexity.
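To illustrate why eDRAM only pays off in some workloads, here's a minimal average-memory-access-time sketch; every latency and hit rate in it is a made-up illustrative number, not a measured one:

```python
# Sketch of the average-memory-access-time (AMAT) argument: a big L4
# eDRAM cache only helps workloads that miss in L3 a lot. All latency
# and hit-rate numbers are illustrative assumptions, not measurements.

def amat_ns(l3_hit, l4_hit, l3_ns=10.0, l4_ns=30.0, dram_ns=80.0):
    """Average memory access time past L2, in nanoseconds."""
    l3_miss = 1.0 - l3_hit
    return l3_hit * l3_ns + l3_miss * (l4_hit * l4_ns + (1.0 - l4_hit) * dram_ns)

# Cache-friendly workload: almost everything hits L3, the L4 barely matters.
print(amat_ns(0.98, 0.8))  # ~10.6 ns with eDRAM
print(amat_ns(0.98, 0.0))  # ~11.4 ns without (L4 hit rate forced to zero)

# Memory-bound workload: lots of L3 misses, the L4 helps a lot.
print(amat_ns(0.60, 0.8))  # ~22.0 ns with eDRAM
print(amat_ns(0.60, 0.0))  # ~38.0 ns without
```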

I think that's Broadwell 2C+GT3 die size, maybe CPU-Z is reading Skylake wrong.
That's GPU-Z, and GPU-Z is consistently wrong about the specs of Intel GPUs. GT2 Broadwell has 3 samplers with 4 TMUs each, making 12. That screenshot also assumes it runs at 350MHz with 16 TMUs, but that's Haswell; Broadwell GT3 has 24. It also says Broadwell GT3 in one part and Skylake GT2 for the name.

SiSoft is right where Skylake should be; that Cinebench score is worrisome instead.
I hope it's just wrong and there is "some" clock-for-clock improvement over Haswell, because better graphics (and power consumption?) aren't exactly my priority right now...
Sandra isn't any better. If you ignore the memory bandwidth and cryptography benchmarks, as well as FP, and look at the ALU benchmark, the gain is ~7%. The one that shows a 21.5% improvement is likely using the GPU. PCMark 8 is boosted because it includes a graphics portion, and if you look at the original link, here's the shocker:

Skylake is barely 5% faster, and is sometimes slower.
The Fire Strike Physics score shows a 7% improvement, meaning maybe we get 5-7% overall. Not really better than Broadwell's gains. The sadder part is that the 5-7% improvement might be over Haswell.
 
Just like Xeon-D came out of nowhere and killed any hope ARM had in servers, I'm pretty sure Intel will react if AMD delivers next year
You are forgetting that Xeon-D was good because it had excellent positioning, not because the core was special. Current Intel chips do have excellent positioning against the competition, which is mainly AMD.

You have servers where Xeon E7 v3 is showing the lowest gains ever: 20% over the previous generation. You have graphics where AMD barely matches the competition even with brand-spanking-new memory, while stuck at 28nm. Even mobile is slowing down. That is proof of one really important fact:

The times when we had great gains are over.

One thing new technology unfailingly delivers, despite the small performance gains, is that it keeps allowing systems to become smaller. That does not seem isolated to Intel/AMD.
 
So the leaked desktop Skylake benchmarks do not look that impressive. But what about Skylake Y? I still have some hope that it might turn out better, i.e. that they improve clocks and reduce throttling compared to Broadwell Y.

Are there still no leaks on frequency, benchmarks, or similar for Skylake Y? They are supposed to be released in... what is it, August or September, right?
 
Yea, because only parts with eDRAM will be faster. Costs for a new architecture are spent once, and it's something they do anyway (R&D). With eDRAM it's a persistent cost on every part they sell. It's not like eDRAM speeds up many CPU applications anyway. The fact is, there is also packaging complexity.
Keep in mind my response was tied to an unlikely (double) supposition: that eDRAM is the main reason a 3.7GHz Broadwell scores as much as a 4.2GHz Skylake in Cinebench R15 ST. I was merely going down this path to expose a crack in that line of reasoning (with eDRAM's benefits supposedly being so great, yet the Skylake leak showing no improvement over Broadwell).
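As a sanity check, here's the clock-for-clock arithmetic behind that supposition; the scores are placeholders standing in for the leaked numbers, not confirmed results:

```python
# Clock-for-clock sanity check on the leaked Cinebench R15 ST comparison.
# The scores below are placeholders for the leak, not confirmed results.
broadwell = {"score": 145.0, "ghz": 3.7}  # stand-in for the leaked Broadwell result
skylake   = {"score": 145.0, "ghz": 4.2}  # stand-in for the leaked Skylake result

bdw_per_ghz = broadwell["score"] / broadwell["ghz"]
skl_per_ghz = skylake["score"] / skylake["ghz"]
print(f"Broadwell: {bdw_per_ghz:.1f} pts/GHz, Skylake: {skl_per_ghz:.1f} pts/GHz")
print(f"Implied clock-for-clock change: {skl_per_ghz / bdw_per_ghz - 1.0:+.1%}")
# Equal scores at 3.7 vs. 4.2 GHz would imply Skylake is ~12% slower per
# clock, which is exactly why the leak looks suspect.
```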
 
eDRAM barely improves Cinebench R15 scores; it's mostly Broadwell's IPC. A lower-clocked Broadwell matching Skylake doesn't make sense, and just shows that these results should be taken with a grain of salt. The graphics drivers are plainly outdated, and there's no reason to think the motherboard BIOS and the rest of the software are any different (as the authors themselves mentioned).
 
That is proof of one really important fact:

The times when we had great gains are over.


You are sorely mistaken if you believe we'll never have 'great gains' again in the future. Of course we will.

Yes, in recent years the gains have diminished, though this is due to several factors, mainly Intel's complete lack of competition in several market segments.
 
You are sorely mistaken if you believe we'll never have 'great gains' again in the future. Of course we will.

Yes, in recent years the gains have diminished, though this is due to several factors, mainly Intel's complete lack of competition in several market segments.

Competition isn't part of it. The core improvements have pretty much been stable since Core 2.

What do you see as great gains in the future? Especially in a world that is focused on performance/watt. (And don't say moar cores.)
 