Skylake-K SKUs leaked


mikk

Diamond Member
May 15, 2012
4,333
2,413
136
This news is written by Anton Shilov, so better forget it. It is originally based on the known leak, and there is no GPU info in that leak.
 

Nothingness

Diamond Member
Jul 3, 2013
3,367
2,459
136
How much would it take you?
Hard to say, but ~20-30% more perf with no new instructions certainly isn't enough. I will wait for an affordable CPU with AVX-512.

My last upgrade was i7 920 to 4770K, and some of my ST programs got almost twice as fast (granted, they are very specific, but still :)).
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
234
106

Is this a trick question?
Just wanted to highlight it a bit. No tricks.

Intel would have to go to Itanium levels of changes in the microarchitecture in order to deliver that kind of an IPC improvement these days.
Can you speculate that Intel may have something cool up its sleeve and is just delaying its introduction until XYZ happens? In GPU tech, we see massive improvements every 2-3 years. But here, we have been stuck with the Core tech for a number of years. Maybe Intel is just lacking some talent, in which case it should employ a bright head like you :cool:
 
Mar 10, 2006
11,715
2,012
126
Just wanted to highlight it a bit. No tricks.


Can you speculate that Intel may have something cool up its sleeve and is just delaying its introduction until XYZ happens? In GPU tech, we see massive improvements every 2-3 years. But here, we have been stuck with the Core tech for a number of years. Maybe Intel is just lacking some talent, in which case it should employ a bright head like you :cool:

There are plenty of tricks involved in delivering a 5%+ perf/clock increase at the same or higher clocks in a given area budget in a given thermal envelope.

In order to understand why Intel has given us relatively low perf/clock increases, you have to understand what Intel is trying to do. The charter for Intel's architects isn't to make a processor perform as well as possible on a given node subject to no constraints; rather, it is to deliver the best performance/watt subject to a whole host of constraints.

Examples of such constraints include:

* Schedule - an amazing design that delivers a huge performance boost over a prior gen design is no good if those implementing the chip can't get it out in a reasonable time-frame

* Cost - even a company like Intel can't afford to spend like a drunken sailor on actually developing a given CPU core; it has the luxury of outspending its main competitors due to its very healthy financial situation, but as you increase the complexity of your design, you increase the number of man-hours required to get it done. If you need to hit schedule, you have to keep your scope in check so your costs don't get too unwieldy.

* Power consumption - optimizing for pure performance is a very different thing from optimizing for power-efficient performance. For every feature that goes in, Intel now requires (IIRC) a 2% perf boost for each 1% increase in power consumption. Intel is also constraining its designs to power envelopes relevant to the products it sells. 15W Ultrabook chips are pretty much all the rage these days in the PC world, so the CPU cores will be optimized for that design point first and foremost.

There is enormous complexity and a huge "bag of tricks" that goes into each new CPU generation; that is why CPU architects are generally so well paid. It's just people have very unrealistic expectations as to what those efforts can bring with each generation.
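To make the power-consumption constraint above concrete, the "2% perf for 1% power" gating rule can be sketched as a trivial check. This is purely illustrative: the function name, the zero-cost handling, and the example numbers are mine, not Intel's.

```python
# Hypothetical sketch of the gating rule described above: a candidate
# feature is accepted only if it buys at least 2% performance for every
# 1% of added power. All names and numbers are illustrative.

def feature_passes(perf_gain_pct: float, power_cost_pct: float,
                   ratio_required: float = 2.0) -> bool:
    """Return True if the perf/power trade meets the required ratio."""
    if power_cost_pct <= 0:        # free (or power-saving) features always pass
        return True
    return perf_gain_pct / power_cost_pct >= ratio_required

# A 3% perf gain for 1% more power clears the 2:1 bar...
print(feature_passes(3.0, 1.0))    # True
# ...but 3% perf for 2% more power does not (3/2 = 1.5 < 2).
print(feature_passes(3.0, 2.0))    # False
```

The point of such a ratio test is that it forces every proposed micro-architectural feature to justify its power cost, which is exactly why headline IPC gains per generation stay modest.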
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,415
404
126
xtors probably could do it, but the shrunken metal lines (higher RC) just suck. It's a tradeoff that comes down to cost vs. performance.
The reality is that the best xtors in the world would be best served with node N-1 BEOL metal. But those xtors are hamstrung with node N metal layers for the sake of reducing cost (die size).
^ This. I deal with that interconnect nightmare at lower geometries daily, and that's not even touching on DFM issues (spacing, OPC pattern correction, density, fill, etc.)

Also, didn't Intel obtain data on transistors with 1THz Ft quite a while back on older tech?
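The "higher RC" complaint above is easy to put numbers on with a back-of-the-envelope Elmore delay calculation: wire resistance per unit length scales as 1/(width × thickness), while capacitance per unit length stays roughly constant, so shrinking the metal makes a fixed-length wire slower. All dimensions and material constants below are rough illustrative guesses, not real process data.

```python
# Back-of-the-envelope sketch of why shrunken BEOL metal hurts.
# Resistance per unit length ~ rho / (width * thickness); capacitance
# per unit length is taken as roughly constant across the shrink.

def wire_rc_delay(length_um: float, width_nm: float, thickness_nm: float,
                  rho: float = 2.2e-8,       # ohm*m, rough Cu-with-barrier value
                  c_per_um: float = 0.2e-15  # F/um, rough per-unit-length C
                  ) -> float:
    """Elmore-style delay (seconds) of a simple distributed wire: 0.5*R*C."""
    area_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    r_total = rho * (length_um * 1e-6) / area_m2   # total resistance, ohms
    c_total = c_per_um * length_um                 # total capacitance, F
    return 0.5 * r_total * c_total

# Same 100um wire on "node N-1" vs "node N" metal (0.7x linear shrink):
old = wire_rc_delay(100, width_nm=50, thickness_nm=100)
new = wire_rc_delay(100, width_nm=35, thickness_nm=70)
print(new / old)   # roughly 2x slower: (1/0.7)^2 = ~2.04
```

Which is exactly the trade-off the quote describes: the xtors got faster, but a plain 0.7x metal shrink roughly doubles the RC delay of a wire that didn't get any shorter.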
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
There are plenty of tricks involved in delivering a 5%+ perf/clock increase at the same or higher clocks in a given area budget in a given thermal envelope.

In order to understand why Intel has given us relatively low perf/clock increases, you have to understand what Intel is trying to do. The charter for Intel's architects isn't to make a processor perform as well as possible on a given node subject to no constraints; rather, it is to deliver the best performance/watt subject to a whole host of constraints.

Examples of such constraints include:

* Schedule - an amazing design that delivers a huge performance boost over a prior gen design is no good if those implementing the chip can't get it out in a reasonable time-frame

* Cost - even a company like Intel can't afford to spend like a drunken sailor on actually developing a given CPU core; it has the luxury of outspending its main competitors due to its very healthy financial situation, but as you increase the complexity of your design, you increase the number of man-hours required to get it done. If you need to hit schedule, you have to keep your scope in check so your costs don't get too unwieldy.

* Power consumption - optimizing for pure performance is a very different thing from optimizing for power-efficient performance. For every feature that goes in, Intel now requires (IIRC) a 2% perf boost for each 1% increase in power consumption. Intel is also constraining its designs to power envelopes relevant to the products it sells. 15W Ultrabook chips are pretty much all the rage these days in the PC world, so the CPU cores will be optimized for that design point first and foremost.

There is enormous complexity and a huge "bag of tricks" that goes into each new CPU generation; that is why CPU architects are generally so well paid. It's just people have very unrealistic expectations as to what those efforts can bring with each generation.
:thumbsup: great post.

I chuckled at the "spend like a drunken sailor" quip, funny but very true.

Only HP can get away with that kind of random-walk business model and not have an activist investor like Ackman take them down.

Just wanted to highlight it a bit. No tricks.
Ha ha, I liked it!

Can you speculate, that Intel may have something cool up its sleeve, and is just delaying its introduction until XYZ happens? In the GPU tech, we see massive improvements every 2-3 years. But here, we are still stuck with Core tech for a number of years. Maybe Intel is just lacking some talent in which case it should employ a bright head like you :cool:
Intel does do that at times; Quark is one example. But I doubt Intel is holding anything back on their primary revenue-generating product lines. Look at the impact of stagnating demand in the PC group: nobody wants that, and yet anybody (Intel included) could predict it was bound to happen when you go two years without much more than a minor 100MHz clockspeed bump on a "refresh" and continue to see delays with its successor.

If you really want to give yourself a scare though, sum up the estimated grand total R&D investment you figure Intel has spent in optimizing the core architecture up to the point of Broadwell. Look at the billions and billions of development dollars it took (hand-wavy estimate is $6B-$8B), and the number of years it took, to get to the point where a product with the IPC and performance of Broadwell could be a reality.

Now ask yourself: who is going to start over and invest that many billions yet again into developing an entirely new architecture?

I don't know if Intel will ever step outside the core sandbox again. I foresee them continuing to plunk down a billion or two additional investment to further optimize on the existing asset that is the core architecture.

GPU products don't cost billions to develop, and the nature of their parallelized workload is such that scaling xtor count pretty much correlates to scaling performance.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
It is hard to imagine how a 14nm quad-core with no HT could possibly run anywhere near 95W even at 3.9GHz. I thought the 4690 topped out around 70 watts in real-world testing? And the 6MB cache is just a plain ol' rip-off. These chips are going to be so frickin' small that they'll be gouging the everloving crap out of us if they charge the same price as a 2500K, which is like 5 times bigger. We are rapidly approaching the point where the Intel useless-GPU tax exceeds half the cost of the part. How long will enthusiasts tolerate that?

They'll tolerate it for as long as there isn't an alternative.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
They'll tolerate it for as long as there isn't an alternative.

Isn't there an alternative though? The LGA2011 platform?

These are two distinctly different platforms. One is for those who seemingly desire an APU-like processor, sans the multiplier limitation; whereas the other is seemingly for those who desire a distinctly non-APU processor.

Enthusiasts have choices, lots of choices. However, I'll grant you that what they might lack is disposable income or the desire to expend any of it.

Take myself for example, I'd love me some unlocked 18-core iGPU-less desktop action, but I'd also rather not spend $2000 so as to experience that computing pleasure.

I suspect that is the reality for the majority of "enthusiasts". They like being enthusiasts, but don't like spending the money to acquire the hardware that sets them apart from the rest of the "non-enthusiasts".

And if that is the reason many enthusiasts purchase the iGPU-laden unlocked K processor (to save coin), only to then complain that they wish the iGPU wasn't present (so as to imply they should be able to spend less and save even more coin) then there really is a whole entire AMD product lineup that will support their enthusiast-on-the-cheap aspirations. Sans iGPU and all!

(note: the only CPU I currently own that is sans iGPU is my AMD FX-8350 :p)
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
I suspect that is the reality for the majority of "enthusiasts". They like being enthusiasts, but don't like spending the money to acquire the hardware that sets them apart from the rest of the "non-enthusiasts".

And if that is the reason many enthusiasts purchase the iGPU-laden unlocked K processor (to save coin), only to then complain that they wish the iGPU wasn't present (so as to imply they should be able to spend less and save even more coin) then there really is a whole entire AMD product lineup that will support their enthusiast-on-the-cheap aspirations. Sans iGPU and all!

Most accurate statement said on these forums in the past few years.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I call FAKE on the specs for Skylake

Skylake: i7 6700K
4.0-4.2GHz - 4 Cores - 95W TDP

Haswell: i7 4790K
4.0-4.4GHz - 4 Cores - 88W TDP

The IVR is supposedly gone from the chip. That should shave some TDP off the CPU.
We are now at 14nm vs 22nm, which should give HUGE benefits TDP-wise.

According to those specs, the TDP has actually increased 7W while clocks have gone down 200MHz?
Nah, not buying that at all
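The skepticism above can be put in rough numbers with the standard CMOS dynamic-power relation P ~ C·V²·f: a 22nm-to-14nm shrink (lower switched capacitance, lower voltage) at slightly lower clocks "should" cut power, not raise it. The capacitance and voltage scaling factors below are illustrative guesses, not real chip data.

```python
# Naive dynamic-power sanity check: P ~ C * V^2 * f.
# c_rel and v_rel are relative to the 22nm Haswell baseline; all
# scaling factors are hypothetical round numbers for illustration.

def dynamic_power(c_rel: float, v_rel: float, f_ghz: float) -> float:
    """Relative dynamic power: capacitance * voltage^2 * frequency."""
    return c_rel * v_rel ** 2 * f_ghz

haswell = dynamic_power(c_rel=1.0, v_rel=1.0, f_ghz=4.4)   # 88W-class part
# Assume the shrink cuts switched capacitance ~30% and voltage ~5%:
skylake = dynamic_power(c_rel=0.7, v_rel=0.95, f_ghz=4.2)
print(skylake / haswell)   # ~0.60 -- naively you'd expect a *lower* TDP
```

Of course this ignores leakage, the relocated IVR losses, and the fact that TDP is a binning class rather than measured power, which is where the 95W figure most likely comes from.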
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,598
739
126
I suspect that is the reality for the majority of "enthusiasts". They like being enthusiasts, but don't like spending the money to acquire the hardware that sets them apart from the rest of the "non-enthusiasts".

I think enthusiasts are price sensitive just like other customers. They are prepared to spend extra money, but they have to get sufficiently large benefit from that to go ahead with the purchase.

E.g. most enthusiasts will not pay 3x the money for 30% more performance. But they might be willing to pay 50% more for 30% more performance, something the average Joe usually will not do. That sets them apart from the "non-enthusiasts".

So just because you're an enthusiast doesn't mean you're prepared to pay ridiculous amounts of money for minuscule improvements.
 

dahorns

Senior member
Sep 13, 2013
550
83
91
I call FAKE on the specs for Skylake

Skylake: i7 6700K
4.0-4.2GHz - 4 Cores - 95W TDP

Haswell: i7 4790K
4.0-4.4GHz - 4 Cores - 88W TDP

The IVR is supposedly gone from the chip. That should shave some TDP off the CPU.
We are now at 14nm vs 22nm, which should give HUGE benefits TDP-wise.

According to those specs, the TDP has actually increased 7W while clocks have gone down 200MHz?
Nah, not buying that at all

As has been mentioned before, Intel puts chips into TDP "classes" prior to release. I don't think these are final TDP numbers.
 

kimmel

Senior member
Mar 28, 2013
248
0
41
So just because you're an enthusiast doesn't mean you're prepared to pay ridiculous amounts of money for minuscule improvements.

There are sizable benefits to 18 cores vs 4 cores. Maybe not on your workloads, which is the crux of your complaint, but they're there.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
As has been mentioned before, Intel puts chips into TDP "classes" prior to release. I don't think these are final TDP numbers.
So what constitutes "prior to release"? I've seen dozens of 4770K and 4790K leaks before launch accurately describing 84/88W.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Isn't there an alternative though? The LGA2011 platform?

These are two distinctly different platforms. One is for those who seemingly desire an APU-like processor, sans the multiplier limitation; whereas the other is seemingly for those who desire a distinctly non-APU processor.

Enthusiasts have choices, lots of choices. However, I'll grant you that what they might lack is disposable income or the desire to expend any of it.

Take myself for example, I'd love me some unlocked 18-core iGPU-less desktop action, but I'd also rather not spend $2000 so as to experience that computing pleasure.

I suspect that is the reality for the majority of "enthusiasts". They like being enthusiasts, but don't like spending the money to acquire the hardware that sets them apart from the rest of the "non-enthusiasts".

And if that is the reason many enthusiasts purchase the iGPU-laden unlocked K processor (to save coin), only to then complain that they wish the iGPU wasn't present (so as to imply they should be able to spend less and save even more coin) then there really is a whole entire AMD product lineup that will support their enthusiast-on-the-cheap aspirations. Sans iGPU and all!

(note: the only CPU I currently own that is sans iGPU is my AMD FX-8350 :p)

True; however, in the context of the person I quoted, he was complaining about cost, so I somehow doubt that he was pushing for an LGA 2011 alternative.

I personally would like 2011 to have an iGPU... I love QuickSync
 

dahorns

Senior member
Sep 13, 2013
550
83
91
So what constitutes "prior to release"? I've seen dozens of 4770K and 4790K leaks before launch accurately describing 84/88W.

Official announcement by Intel would be my thought. Go back and look, you'll see that the 4770K and 4790K were both classified as 95W TDP in early leaks and roadmaps. Intel has been very tight lipped about Skylake, so it shouldn't be too surprising that we aren't getting the same quality of leaks as in the past.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Official announcement by Intel would be my thought. Go back and look, you'll see that the 4770K and 4790K were both classified as 95W TDP in early leaks and roadmaps. Intel has been very tight lipped about Skylake, so it shouldn't be too surprising that we aren't getting the same quality of leaks as in the past.
Ah, thanks. Lol, everything I said has been said earlier. I should stop being so lazy and read earlier comments before posting :p

I guess you are right then. The fact that the 6600K @ 3.9GHz with 4 threads and the 6700K @ 4.2GHz with 8 threads have the same 95W TDP should have been a dead giveaway.

Nevermind my comments then :rolleyes:
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,598
739
126
There are sizable benefits for 18 cores vs 4 cores. Maybe not on your workloads which is the crux of your complaint but it's there.

Yes, I agree that will result in a major performance increase for some workloads. But it's also an expensive piece of hardware, even for enthusiasts. There is a limit to how much enthusiasts are willing to pay too.
 

Dave2150

Senior member
Jan 20, 2015
639
178
116
I do have to wonder at this point if Intel will manage to actually ship Skylake-K this year, before the back-to-school season.

I still have my doubts - surely we would have seen Broadwell-K (all two chips) released by now if that were the case?
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,598
739
126
I do have to wonder at this point if Intel will manage to actually ship Skylake-K this year, before the back-to-school season.

I still have my doubts - surely we would have seen Broadwell-K (all two chips) released by now if that were the case?

Yes, otherwise it would give Broadwell-K a ridiculously short lifespan.

But from the tech specs presented so far, it looks like Broadwell-K (renamed to C, as in i7-5775C, i5-5675C) may in fact be more closely related to the 4770R than the 4770K. That is despite the somewhat confusing K naming (now C) implying a 4770K successor.
 

DrMrLordX

Lifer
Apr 27, 2000
23,204
13,289
136
Do remember that the C chips have R brethren with the same specs. That is, i7-5775R, etc. Not sure if they're unlocked like the C chips.