I was recently talking with somebody who is still using an X2 4200+ in 2015. I nearly collapsed. :awe:
Until benches are out speculation is just that. Skylake could be 5% faster or 55% faster. Only Intel knows.
My 2nd computer is still running an X2.
I would bet it will be more like 5% again. In fact I bet Intel has all of the gains for the next 3 or 4 processors laid out in a very methodical manner. Since there isn't any competition there isn't any reason to "let it all go" at once.
Intel Lowers Its Q1'15 Outlook On Account Of Weakening PC Demand
http://www.forbes.com/sites/greatspeculations/2015/03/13/intel-lowers-its-q115-outlook-on-account-of-weakening-pc-demand/
Broadwell desktop is codenamed BDW-S. That is cancelled, as I've shown you.
BDW-S was never announced, so how can it be cancelled? Even on old roadmaps there was no such thing, only BDW-K.
Historically, that was one of the biggest jumps in performance Intel has ever seen, though really the fair comparison was Presler to Conroe (Presler was the prevailing Intel desktop dual-core at the beginning of 2006, even if not very many people bought them). And of course you can see them in the above-posted chart as the Pentium D 950 and 960.
Just looking at the 960 vs. the E6700 (3.6 GHz vs. 2.66 GHz), the increase in IPC is massive in Quake 4 alone. I would be shocked if Skylake could show up at, oh, I don't know, 3 GHz and put up gains of nearly +55% vs. a hypothetical 4 GHz Broadwell in a video game. At that point, the so-called "14 nm clock speed problem" that Intel is allegedly having wouldn't matter a whit.
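To put a number on that 960-vs-E6700 comparison: if two chips post similar scores at different clocks, the clock ratio alone gives you a floor on the per-clock (IPC) advantage. A quick sketch of that arithmetic (the scores here are placeholders, not real benchmark results):

```python
# Rough per-clock comparison; real IPC depends on the workload, so the
# scores below are illustrative inputs, not measured Quake 4 numbers.
def ipc_ratio(score_a, clock_a_ghz, score_b, clock_b_ghz):
    """How much more work per clock chip A does than chip B."""
    return (score_a / clock_a_ghz) / (score_b / clock_b_ghz)

# Even if a 2.66 GHz E6700 merely *tied* a 3.6 GHz Pentium D 960,
# it would already be doing ~35% more work per clock:
print(round(ipc_ratio(100, 2.66, 100, 3.6), 2))  # 1.35
```

Since Conroe actually won outright at the lower clock, the real per-clock gap was far larger than that floor.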
Intel's competition are the very CPUs it sold over past years. The world is full of 32nm Sandy Bridge customers who aren't quite convinced to upgrade to a Haswell refresh SKU.
If they approach the market as you are suggesting, then they can expect lower and lower revenue as shippable units drop from the resultant lack of demand.
I don't think Intel intentionally rolls out innovation in incremental fashion for the sake of stagnating today's sales on the premise that it will make tomorrow's sales a little bit higher than they would have otherwise been.
They want all that revenue now.
But I am sure they manage projects from a risk/reward perspective and it is very risky to throw a whole slew of unproven features into a spanking new "let it all go" product. Tick-tock itself is a risk-management approach, its very existence is proof positive that Intel is approaching changes in a very risk-conservative fashion.
Tick-tock isn't done so they can roll out 5% improvements and have customers upgrade for lack of competition. They are acutely aware that 5% improvements make for a lack-of-demand scenario, as all the headlines reflected when Intel was forced to announce its guidance adjustment.
Here are bigger ones:
8086 -> 286
286 -> 386
386 -> 486
486 -> Pentium
What they seem to be doing is over-engineering one or two parts of the CPU to increase IPC, so that the old bottleneck is not only gone but things open up so much that another area becomes the bottleneck, which is then addressed in the next iteration.
Intel could have done this all at once but there was no need.
For example, new branch predictor for Sandy Bridge.
Then Ivy gets dynamically partitioned internal structures and prefetcher improvement.
Then Haswell gets a wider execution engine...
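To see why a better branch predictor is such a leverage point in that sequence, here is a toy model (my own illustration, nothing like Sandy Bridge's actual predictor, which is far more sophisticated) of a classic 2-bit saturating-counter predictor and the kinds of branch patterns it handles well or badly:

```python
# Toy 2-bit saturating-counter branch predictor (illustrative only).
def predict_accuracy(outcomes):
    """Fraction of branches predicted correctly by a single 2-bit
    saturating counter (states 0-3; predict taken when state >= 2)."""
    state, correct = 2, 0  # start in "weakly taken"
    for taken in outcomes:
        if (state >= 2) == taken:
            correct += 1
        # saturating update toward the actual outcome
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return correct / len(outcomes)

# A loop-exit branch (taken 9 times, not-taken once, repeated) is highly
# predictable; a strictly alternating pattern defeats a simple counter.
loop_like = ([True] * 9 + [False]) * 100
alternating = [True, False] * 500

print(predict_accuracy(loop_like))    # 0.9 (one miss per loop exit)
print(predict_accuracy(alternating))  # 0.5 (no better than a coin flip)
```

Every mispredict flushes the pipeline, so each point of predictor accuracy feeds directly into how much the wider back-end added later can actually be kept busy.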
486 to Pentium was massive....
So was Pentium D to Conroe?
Did you carefully study Geekbench behavior in multithread? Are you sure it shares (modified) data across threads? If not, sharing L2 and/or having different interconnect topologies should have a very small impact on the Geekbench MT score.

Skylake does, however, have a noticeable boost in multithreaded performance. I am currently attributing this to a couple of changes that are likely to make it into Skylake (pulling from RWT's David Kanter here). The first is a shift to a "tiled" cache hierarchy, where L2 is shared between cores in groups of two, much like Bulldozer (arguably) and Apple's Cyclone. Sharing L2 between two cores is something Intel has not chosen to do since it bolted on an L3 cache. It's not uncommon to see the L2 shared in L3-less chips (in fact, it's really all you'll ever see), but Intel chose a different route on chips with an L3 included. They might be going back to a shared L2, and I think that's a smart move.
The other change that Kanter alluded to was a change in the interconnect fabric: from the current ring bus, to a 2D mesh. I don't really know many of the details surrounding this, although I'm fairly certain it'd be a good improvement to inter-core communication, as I recall ring buses being a pretty outdated concept.
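One concrete reason a mesh can beat a ring for inter-core traffic is simple geometry: average hop count. A small sketch (my own back-of-the-envelope, not Intel's actual topology figures) comparing a bidirectional ring against a 2D mesh with the same core count:

```python
# Average shortest-path hops between distinct cores: bidirectional ring
# vs. 2D mesh. Illustrative geometry only, not Intel's real fabric.
def ring_avg_hops(n):
    """Average hop count between distinct nodes on a bidirectional ring."""
    total = sum(min(abs(i - j), n - abs(i - j))
                for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))

def mesh_avg_hops(rows, cols):
    """Average Manhattan distance between distinct nodes on a mesh."""
    nodes = [(r, c) for r in range(rows) for c in range(cols)]
    total = sum(abs(r1 - r2) + abs(c1 - c2)
                for (r1, c1) in nodes for (r2, c2) in nodes
                if (r1, c1) != (r2, c2))
    n = len(nodes)
    return total / (n * (n - 1))

# Already at 16 cores the mesh's average distance is clearly lower:
print(ring_avg_hops(16))    # ~4.27 hops
print(mesh_avg_hops(4, 4))  # ~2.67 hops
```

The gap widens as core counts grow, since the ring's average distance scales with n while the mesh's scales roughly with the square root of n, which is presumably why a mesh looks attractive once you go past a handful of cores.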
And then what? If AMD becomes totally uncompetitive in desktop and mobile x86/x64 (they are dangerously close to that now), will Intel stop giving us even 5% gains and move to only introducing new ISA extensions and opcodes (doling them out, one per architecture), giving us potential gains only if we both buy their top-end CPUs and buy new software?
I imagine you will see more gimmicks like the wireless charging in Skylake.
Or it was simply cancelled early enough that it never appeared on public roadmaps.
Haswell Refresh appeared on roadmaps at least as early as June 2013! Before Broadwell-K did.
It's improbable that Intel being stuck on Haswell for 2 years was a deliberate plan.
I'm hoping that someday in the next 5 years Intel decides to direct its engineers either to develop a process node that increases fmax or to design a circuit that can compute "A+B=" twice as fast as today's processors can. (Is a 2x improvement over 5 years too much to expect anymore? I kind of feel like it is.)
Prescott could literally compute A+B about twice as fast (half the effective latency) as Haswell can today, and with the same throughput. Fat lot of good that did it.
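That's because add latency only matters when each add depends on the previous one; for independent adds, issue width is the limit. A toy cycle-count model (the latencies and port counts below are rough illustrations in the spirit of Prescott's double-pumped ALUs vs. Haswell's four integer ports, not measured figures):

```python
import math

# Back-of-the-envelope model: cycles to retire N integer adds under a
# given add latency (in cycles) and number of add-capable ports.
def dependent_chain_cycles(n, latency):
    # Each add must wait for the previous result, so latency dominates.
    return n * latency

def independent_cycles(n, ports):
    # Independent adds are limited only by how many can issue per cycle.
    return math.ceil(n / ports)

print(dependent_chain_cycles(1000, 0.5))  # half-cycle-latency chain: 500
print(dependent_chain_cycles(1000, 1.0))  # one-cycle-latency chain: 1000
print(independent_cycles(1000, 4))        # 4 ports, independent adds: 250
```

So the half-latency adder wins only on serial dependency chains; on the parallel-friendly code that dominates real workloads, wide issue beats fast single adds, which is roughly why Prescott's trick bought it so little.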
