Hard to say, but ~20-30% more perf with no new instructions certainly isn't enough. I will wait for an affordable CPU with AVX-512.
How much would it take you?
20% IPC increase over HW likely?
1 or 0.
Just wanted to highlight it a bit. No tricks.
Is this a trick question?
Can you speculate that Intel may have something cool up its sleeve, and is just delaying its introduction until XYZ happens? In GPU tech, we see massive improvements every 2-3 years. But here, we are still stuck with Core tech for a number of years. Maybe Intel is just lacking some talent, in which case it should employ a bright head like you.
Intel would have to go to Itanium levels of changes in the microarchitecture in order to deliver that kind of an IPC improvement these days.
^ This. I deal with that interconnect nightmare at lower geometries daily, and that's not even touching on DFM issues (spacing, OPC pattern correction, density, fill, etc.).
xtors probably could do it, but the shrunken metal lines (higher RC) just suck. It's a tradeoff that comes down to cost vs. performance.
The reality is that the best xtors in the world would be best served with node N-1 BEOL metal. But those xtors are hamstrung with node N metal layers for the sake of reducing cost (die size).
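To make the higher-RC point concrete, here is a toy back-of-envelope model of wire delay (delay ∝ R·C). Everything in it is an illustrative assumption rather than foundry data: the wire_rc_delay helper is hypothetical, the shrink is taken as 0.7x linear, and the 1.3x effective-resistivity penalty stands in for surface/grain-boundary scattering at small dimensions.

```python
# Toy model: wire delay scales with R*C. All numbers are illustrative
# assumptions, not foundry data.

def wire_rc_delay(width, height, spacing, length, resistivity=1.0, eps=1.0):
    """Relative RC delay of a metal line (arbitrary units).

    R ~ rho * length / (width * height)       # thinner/narrower -> more R
    C ~ eps * length * (height / spacing)     # tighter pitch -> more coupling C
    """
    r = resistivity * length / (width * height)
    c = eps * length * (height / spacing)
    return r * c

# Node N-1 metal, normalized to 1.0 in every dimension.
old = wire_rc_delay(width=1.0, height=1.0, spacing=1.0, length=1.0)

# Node N metal: ~0.7x linear shrink on width/spacing/length, plus an assumed
# 1.3x effective resistivity from surface/grain-boundary scattering.
s = 0.7
new = wire_rc_delay(width=s, height=1.0, spacing=s, length=s, resistivity=1.3)

print(f"node N-1 metal: {old:.2f}")  # 1.00
print(f"node N   metal: {new:.2f}")  # 1.30 - slower despite the shorter wire
```

Under these assumptions the shrunk line is ~30% slower even though it is 30% shorter, which is exactly the "best xtors with node N-1 BEOL" tradeoff described above.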
:thumbsup: great post.
There are plenty of tricks involved in delivering a 5%+ perf/clock increase at the same or higher clocks, in a given area budget, in a given thermal envelope.
In order to understand why Intel has given us relatively low perf/clock increases, you have to understand what Intel is trying to do. The charter for Intel's architects isn't to make a processor perform as well as possible on a given node subject to no constraints; rather, it is to deliver the best performance/watt subject to a whole host of constraints.
Examples of such constraints include:
* Schedule - an amazing design that delivers a huge performance boost over a prior gen design is no good if those implementing the chip can't get it out in a reasonable time-frame
* Cost - even a company like Intel can't afford to spend like a drunken sailor on actually developing a given CPU core; it has the luxury of outspending its main competitors due to its very healthy financial situation, but as you increase the complexity of your design, you increase the number of man-hours required to get it done. If you need to hit schedule, you have to keep your scope in check so your costs don't get too unwieldy.
* Power consumption - optimizing for pure performance is a very different thing from optimizing for power-efficient performance. For every feature that goes in, Intel now requires (IIRC) a 2% perf boost for each 1% increase in power consumption (see the sketch after this post). Intel is also constraining its designs to power envelopes relevant to the products it sells. 15W Ultrabook chips are pretty much all the rage these days in the PC world, so the CPU cores will be optimized for that design point first and foremost.
There is enormous complexity and a huge "bag of tricks" that goes into each new CPU generation; that is why CPU architects are generally so well paid. It's just people have very unrealistic expectations as to what those efforts can bring with each generation.
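A minimal sketch of the power gate mentioned in the third bullet, assuming the rule simply means the percentage perf gain must be at least twice the percentage power cost; the passes_power_gate helper and the candidate features are made up for illustration.

```python
# Hypothetical illustration of a "2% perf per 1% power" feature gate.
# The rule's exact form inside Intel is not public; this assumes a plain
# ratio test.

def passes_power_gate(perf_gain_pct, power_cost_pct, required_ratio=2.0):
    """True if a feature buys enough performance per unit of added power."""
    if power_cost_pct <= 0:            # free (or power-saving) feature
        return True
    return perf_gain_pct / power_cost_pct >= required_ratio

candidates = [                          # (name, +perf %, +power %) - made up
    ("wider OoO window",  3.0, 2.5),
    ("better prefetcher", 2.0, 0.8),
    ("uop-cache tweak",   1.0, 0.0),
]

for name, perf, power in candidates:
    verdict = "accept" if passes_power_gate(perf, power) else "reject"
    print(f"{name:18} +{perf}% perf / +{power}% power -> {verdict}")
```

With these made-up numbers the wider out-of-order window is rejected (only 1.2% perf per 1% power) while the cheaper features pass, which is how such a rule steers a design toward power-efficient performance rather than raw performance.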
Ha ha, I liked it!
Intel does do that at times; Quark is one example. But I doubt Intel is holding anything back on their primary revenue-generating product lines. Look at the impact of stagnating demand in the PC group: nobody wants that, and yet anybody (Intel included) could have predicted it was bound to happen when you go two years without much more than a minor 100MHz clockspeed bump on a "refresh" and continue to see delays with its successor.
It is hard to imagine how a 14nm quad-core with no HT could possibly run anywhere near 95W, even at 3.9GHz. I thought the 4690 topped out around 70 watts in real-world testing? And the 6MB cache is just a plain ol' ripoff. These chips are going to be so frickin' small that they are going to be gouging the everloving crap out of us if they charge the same price as a 2500K, which is like 5 times bigger. We are rapidly approaching the point where the Intel useless-GPU tax is exceeding half the cost of the part. How long will enthusiasts tolerate that?
They'll tolerate it for as long as there isn't an alternative.
I suspect that is the reality for the majority of "enthusiasts". They like being enthusiasts, but don't like spending the money to acquire the hardware that sets them apart from the rest of the "non-enthusiasts".
And if that is the reason many enthusiasts purchase the iGPU-laden unlocked K processor (to save coin), only to then complain that they wish the iGPU wasn't present (so as to imply they should be able to spend less and save even more coin) then there really is a whole entire AMD product lineup that will support their enthusiast-on-the-cheap aspirations. Sans iGPU and all!
I call FAKE on the specs for Skylake
Skylake: i7 6700K
4.0-4.2GHz - 4 Cores - 95W TDP
Haswell: i7 4790K
4.0-4.4GHz - 4 Cores - 88W TDP
The IVR is supposedly gone from the chip. That should shave some TDP off the CPU.
We are now at 14nm vs. 22nm, which should give HUGE benefits TDP-wise.
According to those specs, the TDP has actually increased 7W while the clock has gone down 200MHz?
Nah, not buying that at all
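The skepticism above rests on first-order dynamic power scaling, P ≈ C·V²·f. A rough sketch of that arithmetic follows; the capacitance and voltage factors are assumptions picked for illustration, and real TDP also covers the iGPU, the uncore, and (on Haswell) the IVR's own losses, none of which this captures.

```python
# Back-of-envelope dynamic power: P ~ C * V^2 * f (normalized units).
# Scaling factors below are illustrative assumptions, not measured silicon.

def dynamic_power(c, v, f_ghz):
    return c * v * v * f_ghz

p_haswell = dynamic_power(c=1.00, v=1.00, f_ghz=4.4)   # 4790K-ish baseline

# Guess for 22nm -> 14nm: ~35% less switched capacitance, ~5% lower voltage,
# and the leaked 200MHz lower peak clock.
p_skylake = dynamic_power(c=0.65, v=0.95, f_ghz=4.2)

print(f"relative core dynamic power: {p_skylake / p_haswell:.2f}")  # ~0.56
```

Under these assumptions the core's dynamic power drops to roughly half, so a TDP that goes up 7W looks inconsistent with the node shrink, which supports the "placeholder TDP class" explanation offered below.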
So just because you're an enthusiast doesn't mean you're prepared to pay ridiculous amounts of money for minuscule improvements.
So what constitutes a "prior release"? I've seen dozens of 4770K and 4790K leaks before launch accurately describing 84/88W.
As has been mentioned before, Intel puts chips into TDP "classes" prior to release. I don't think these are final TDP numbers.
Isn't there an alternative though? The LGA2011 platform?
These are two distinctly different platforms. One is for those who seemingly desire an APU-like processor, sans the multiplier limitation; whereas the other is seemingly for those who desire a distinctly non-APU processor.
Enthusiasts have choices, lots of choices. However, I'll grant you that what they might lack is disposable income or the desire to expend any of it.
Take myself for example, I'd love me some unlocked 18-core iGPU-less desktop action, but I'd also rather not spend $2000 so as to experience that computing pleasure.
(note: the only CPU I currently own that is sans iGPU is my AMD FX-8350)
Official announcement by Intel would be my thought. Go back and look; you'll see that the 4770K and 4790K were both classified as 95W TDP in early leaks and roadmaps. Intel has been very tight-lipped about Skylake, so it shouldn't be too surprising that we aren't getting the same quality of leaks as in the past.
Ah thanks, lol. Everything I said has been said earlier. I should stop being so lazy and read earlier comments before posting.
There are sizable benefits for 18 cores vs 4 cores. Maybe not on your workloads which is the crux of your complaint but it's there.
I do have to wonder at this point if Intel will manage to actually ship Skylake-K this year, before the back-to-school season.
I still have my doubts - surely we would have seen Broadwell-K (all two chips) release by now if that were the case?
