Is mainstream desktop CPU development "completed"?

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
So here are the facts:

* Nothing much has happened on desktop performance-wise since Sandy Bridge.
* 14 nm Broadwell brought no major IPC or frequency increase.
* 14 nm Skylake is not expected to do that either.
* Same for 32->22 nm.

So that leaves us with the question:

Is desktop CPU development more or less "completed" as far as performance goes?

Extrapolating from the previous 5 years of progress clearly indicates that it is. Is there anything on the horizon that may turn the development in a different direction, or is this what can be expected going forward too?

PS. Note that I'm only talking about CPU performance increase, not lower TDP or iGPU performance improvements.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
There, now that we have that cleared up.

Mainstream is wherever the volume is, which isn't PCs. Mainstream, aka phones and tablets, have a long way to go performance-wise.

Which isn't the topic of this thread...
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Possibly.

If there is another breakthrough, I bet it will be related to manufacturing.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
Possibly.

If there is another breakthrough, I bet it will be related to manufacturing.

Interesting. Care to clarify what kind of breakthrough you are thinking about? And would it affect performance in any way?
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
That's an unfair comparison. Stock clock speed went up, by a lot. Maximum overclocked clock speed didn't go up much, if at all, though.

Something something goalposts......

But anyways, I don't have that data on hand so I can't spit out a trivial reply so quickly. :p
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
So here are the facts:

* Nothing much has happened on desktop performance-wise since Sandy Bridge.
* 14 nm Broadwell brought no major IPC or frequency increase.
* 14 nm Skylake is not expected to do that either.
* Same for 32->22 nm.

So that leaves us with the question:

Is desktop CPU development more or less "completed" as far as performance goes?

Extrapolating from the previous 5 years of progress clearly indicates that it is. Is there anything on the horizon that may turn the development in a different direction, or is this what can be expected going forward too?

PS. Note that I'm only talking about CPU performance increase, not lower TDP or iGPU performance improvements.

There are various technologies, already available to varying extents, which will (potentially) MASSIVELY improve the performance of desktop PCs.

I've used some of them, and they are amazing. When they come to the mainstream marketplace, it will be a big improvement.

Examples are:
Imagine having a 72-thread, 36-core computer. You can already get them (but you are talking about roughly $15,000, fully configured).
Although the server I was playing about with was NOT that powerful, it was AMAZINGLY faster than current desktop PCs. I ran more than 100 things at the same time (for the heck of it), and it was still happy to do more!

(Obviously NOT with single-core applications, but when you do stuff which can usefully utilize so many cores and threads.)

As time goes on, huge numbers of cores in a desktop PC, along with progress on software which can usefully employ them, are very likely to come along (my opinion; some think that there are fundamental limits to software's ability to usefully use a huge number of cores, as the little Amdahl's law sketch below illustrates. Time will answer this question, I guess). Possibly with many stacked dies in the same processor package, when/if the technical hurdles can be solved.

E.g. 100,000 cores in a $499 PC. (When? 2025?? 2125?? Year 9999?? etc.)

(A bit like how you can have 20,000 threads ??? (not sure of the exact amount) in a modern, affordable graphics card.)
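
To put the "fundamental limits" worry in rough numbers, here is a minimal Amdahl's law sketch (Python, with made-up illustrative fractions rather than measurements from any real workload): whatever fraction of a program stays serial caps the overall speedup, no matter how many cores you add.

```python
# Rough Amdahl's law sketch (illustrative only): the serial fraction of a
# program caps the overall speedup, however many cores are available.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when `parallel_fraction` of the work scales with core count."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

if __name__ == "__main__":
    for p in (0.50, 0.90, 0.99):          # fraction of work that parallelizes (assumed)
        for n in (4, 36, 1000, 100_000):  # core counts, incl. the 36-core server above
            print(f"{p:.0%} parallel, {n:>6} cores -> {amdahl_speedup(p, n):8.1f}x")
    # Even at 99% parallel, 100,000 cores top out around ~100x, not 100,000x.
```

So the 100,000-core desktop only pays off if software can also drive its serial fraction down towards zero.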

Fully re-configurable (whatever it ends up being called) CPUs, whereby the internals of some/most/all of the CPU can be reprogrammed, at will, to suit what you want to do.

You can already get them (they are called FPGAs). One that I was playing with was so fast and re-programmable that I was able to "program" it to act as a very high speed graphics generator, with the FPGA plugging straight into a monitor. I did NOT write the "program" for it (VHDL); it came from the internet and/or the FPGA manufacturer.

What was amazing about it is that if you wrote an equivalent program on a PC, its loop time would be way too slow to send the image to a monitor in real time, whereas the FPGA happily sends the image to the monitor in real time. That is because FPGAs can happily run even very complicated designs at many hundreds of megahertz, since they essentially do most things in parallel.
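
Some rough arithmetic shows the timing gap (assuming the standard 1920x1080 @ 60 Hz timing; the exact figures depend on whatever video mode the FPGA design actually targeted):

```python
# Back-of-the-envelope sketch of why a serial software loop struggles to
# generate video in real time while an FPGA pipeline does not (1080p60 assumed).

ACTIVE_W, ACTIVE_H = 1920, 1080    # visible pixels
TOTAL_W, TOTAL_H = 2200, 1125      # including blanking intervals (standard 1080p60 timing)
REFRESH_HZ = 60

pixel_clock_hz = TOTAL_W * TOTAL_H * REFRESH_HZ   # ~148.5 MHz
ns_per_pixel = 1e9 / pixel_clock_hz               # ~6.7 ns budget per pixel

print(f"Pixel clock: {pixel_clock_hz / 1e6:.1f} MHz")
print(f"Time budget per pixel: {ns_per_pixel:.1f} ns")

# A serial CPU loop must finish ALL of its per-pixel work (plus loop overhead)
# inside that ~6.7 ns window. An FPGA clocked at the pixel rate meets it
# easily, because the colour/sync logic is dedicated parallel hardware that
# emits one pixel per clock.
```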

We sort of hit a laws-of-physics limit in the past, when we transitioned from tubes/valves to massively faster discrete transistors.

Similarly, there was a massive speedup going from discrete transistors to integrated circuits.

Potentially, other invention(s) may take place in the coming decades which will massively speed things up even more (even single-core stuff).

Just before Intel started up, computers were made either out of discrete transistors or out of many (usually hundreds, but sometimes only a few) different integrated circuits. Intel created the world's first commercial single-chip microprocessor (some people disagree that Intel was first, because there was the odd similar chip here and there, such as one used in a secret fighter jet project).

So it could be a completely different company that creates the next greatest thing in the world of computing.

It's anyone's guess what it will be: nanotubes, graphene, light-based computing, quantum devices, non-silicon materials, etc.
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
So here are the facts:

* Nothing much has happened on desktop performance-wise since Sandy Bridge.
* 14 nm Broadwell brought no major IPC or frequency increase.
* 14 nm Skylake is not expected to do that either.
* Same for 32->22 nm.

So that leaves us with the question:

Is desktop CPU development more or less "completed" as far as performance goes?

Extrapolating from the previous 5 years of progress clearly indicates that it is. Is there anything on the horizon that may turn the development in a different direction, or is this what can be expected going forward too?

PS. Note that I'm only talking about CPU performance increase, not lower TDP or iGPU performance improvements.

Regarding IPC increases, I think that if Intel is able to successfully implement SMT with a greater number of threads per core (e.g., four threads per core), there would be good reason to increase single-core IPC.

So for our dual cores we would have 2C/8T as the top configuration.

In fact, I have been wondering if the Core architecture (with multi-way SMT) could one day become so efficient that it replaces quad-core Atom in many applications. (One Haswell big core has about the same die size as four Silvermont Atom cores.)

With that mentioned, I have read there are some security concerns with SMT (vs. individual cores each running only one thread), so Intel would have to address that in the design of the processor.
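
As a side note on how SMT width is visible to software: on Linux the kernel reports which logical CPUs share a physical core, so today's Hyper-Threading parts show two siblings per core, and a hypothetical 4-way SMT core would simply show four. A minimal Linux-only sketch (assuming the usual sysfs layout):

```python
# Minimal sketch: report how many hardware threads share each physical core,
# using the Linux sysfs topology files (Linux-only; paths assumed present).

import glob

def expand(cpu_list: str) -> set:
    """Expand a sysfs CPU list such as '0,4' or '0-3' into a set of CPU numbers."""
    cpus = set()
    for part in cpu_list.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

def threads_per_core() -> dict:
    """Map each physical core (as a tuple of its logical CPUs) to its SMT width."""
    cores = set()
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
    for path in glob.glob(pattern):
        with open(path) as f:
            cores.add(frozenset(expand(f.read().strip())))
    return {tuple(sorted(c)): len(c) for c in cores}

if __name__ == "__main__":
    for core, width in sorted(threads_per_core().items()):
        print(f"logical CPUs {core}: {width}-way SMT")
```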
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Sorry, not seeing much revolution there in terms of CPU performance increase, given the time that has passed between those CPU models.

So what you MEAN to say in your OP is that the compound annual growth rate is below your expectations. Not that "nothing has happened". :p
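
The arithmetic behind that framing is simple enough to write down; the numbers below are placeholders I made up for illustration, not benchmark results:

```python
# Compound annual growth rate (CAGR) of a performance score. The inputs here
# are hypothetical placeholders, not measurements of any real CPU.

def cagr(start_score: float, end_score: float, years: float) -> float:
    """Annualized growth rate that turns start_score into end_score over `years`."""
    return (end_score / start_score) ** (1.0 / years) - 1.0

# A part 30% faster than its 4-year-old predecessor still improved,
# just at a modest ~6.8% per year rather than "nothing happened".
print(f"{cagr(100.0, 130.0, 4):.1%} per year")
```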
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
So what you MEAN to say in your OP is that the compound annual growth rate is below your expectations. Not that "nothing has happened". :p

Yeah, if that's the way you'd like to put it. I thought it was obvious.

So to clarify:

"The mainstream desktop CPU performance increases per year are ridiculously low compared to how it used to be 10+ years ago."

Any objections to that? :confused:
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Yeah, if that's the way you'd like to put it. I thought it was obvious.

So to clarify:

"The mainstream desktop CPU performance increases per year are ridiculously low compared to how it used to be 10+ years ago."

Any objections to that? :confused:

I tend to nitpick at hyperbole. :)
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
Top-end performance hasn't moved much, but perf/watt has. That's what is important in mobile and servers - which is where everything is going.

I am honestly not THAT disappointed. Without something significantly better out there, the itch to upgrade isn't that hard to resist, which leaves me more money for other things. ... Like mobile :colbert:
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
There, now that we have that cleared up.

Mainstream is wherever the volume is, which isn't PCs. Mainstream, aka phones and tablets, have a long way to go performance-wise.

Which isn't the topic of this thread...

Fjodor2001, I think you were too quick to jump to a defensive position by dismissing kimmel's point.

Why did desktop CPUs experience the annualized compound rate of performance improvements in the past that you now note is lacking?

Intel, AMD, Texas Instruments, Via, NexGen, WinIDT, etc. all built desktop CPUs and rushed about as frantically as possible for more than a decade trying to get faster and faster models out in front of the desktop consumer.

Consumers bought the chips, justifying the development expenses and business risks that preceded the creation of those chips.

And then guess what happened to those desktop consumers? They started to care less and less about desktop performance. More and more of them started liking the idea of mobile computing; having a lighter and longer-lasting laptop was worth their consumer dollars.

So when you ask where the development momentum went in the desktop market, momentum that was dependent on desktop consumers wanting to buy ever-higher-performing desktop CPUs, you have to ask yourself where the consumers' dollars themselves went.

And to kimmel's credit and astute synopsis, those revenue dollars went to Apple, to Samsung, and to Intel's own mobile product offerings.

So why would Intel, or AMD for that matter, justify sinking ever-higher R&D expenses into the development of uber-fast desktop processors when the markets have spoken, voted with their wallets, and are buying up smartphones, tablets, and silly thin netbooks/laptops instead of desktops?

The premise of your argument in the OP appears to be that you believe the pace of advancement stagnated, and thus the consumer had no choice but to migrate to other compute platforms and spend their money on mobile products. I don't buy that; the development money follows where decision makers think the markets are headed.

Circa 1880 there were a number of companies developing the next generation of leading-edge horse-drawn carriages. And then the day came when they stopped investing in developing the next best horse carriage.

Did the end of the era of horse-drawn carriages come about because the pace of development stagnated? Or did it come about because consumers abandoned that market and pursued a more feature-compelling product called the automobile, forcing horse-carriage companies to allocate R&D appropriately in order to best survive the transition?

I think kimmel is right on the money. The focus is not on desktop performance improvements because the consumer markets are no longer focused on it either. Consumer markets shifted; they want mobility and other features (wireless charging, faster network speeds, etc.), and these companies have shifted their R&D priorities accordingly.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
So here are the facts:

* Nothing much has happened on desktop performance-wise since Sandy Bridge.
* 14 nm Broadwell brought no major IPC or frequency increase.
* 14 nm Skylake is not expected to do that either.
* Same for 32->22 nm.

So that leaves us with the question:

Is desktop CPU development more or less "completed" as far as performance goes?

Desktop development is "complete" in the same sense that the development of wireless conventional phones or typewriters is complete.

Basically the desktop sits in an odd place today: too cumbersome for light use, and too weak compared to the cloud. All the desktop killer apps, office applications and the internet, moved up to the cloud or mobile platforms. All that's left is gaming, but Microsoft is working hard to ensure that new companies won't even consider Windows as their platform of choice.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
All that's left is gaming, but Microsoft is working hard to ensure that new companies won't even consider Windows as their platform of choice.

My G3258 overclocked to 4.3 GHz, with an R7 250X and the Mantle API, informally tested at a resolution of 720p (high detail setting), currently performs faster than an Xbox One (which runs the game at 720p) in 64-player mode in that particular scene of the game I linked (which supposedly really stresses the CPU):

http://forums.anandtech.com/showpost.php?p=37279088&postcount=173

With that mentioned, apparently the Xbox One had 10% of its GPU reserved for Kinect, and this will no longer be the case in the future. Also, I expect DX12 will help the Xbox One yet again (possibly along with better optimization for ESRAM).

However, I do wonder if EA/DICE will at some point consider moving Battlefield over to Linux (via the Vulkan API, which is the successor to the Mantle API) in order to cut out the Microsoft tax, especially with the price of ultra-cheap desktop hardware being so low now relative to the price of the Microsoft software license.
 
Last edited:

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
However, I do wonder if EA/DICE will at some point consider moving Battlefield (via the Vulkan API, which is the successor to the Mantle API) in order to cut out the Microsoft tax, especially with the price of ultra-cheap desktop hardware being so low now relative to the price of the Microsoft software license.

The problem is much more fundamental than a Microsoft tax. Developers pay an Apple tax, a Steam tax or a Google tax, and that doesn't prevent them from developing for those platforms and making very good money on them. But with Windows there are some caveats.

Microsoft should be building Windows as an easy, stable platform for companies to use to sell their applications, further improving the value of the Windows ecosystem. Instead they are competing with their partners (browsers, antivirus, app stores, etc.) and making it even more closed. Why would you build a new business around the Windows platform when a few years down the road Microsoft could be choking your business by incorporating a free feature into Windows and competing with you? Microsoft could afford to do that when Windows was the only game in town, but today the mobile platforms are where the money is heading, so the new gaming companies will go there. And the established gaming companies can enjoy plenty of stability on the console market.
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
but today the mobile platforms are where the money is heading, so the new gaming companies will go there. And the established gaming companies can enjoy plenty of stability on the console market.

I think some kind of Linux gaming console/desktop could be competitive too (for example, Valve offers a way for developers to launch games on Steam by community vote), but we will need to see Intel and other companies make further advances in cheap, relatively high-performance desktop chips (like the G3258).

The question is: how does Intel make a $40 to $60 desktop-level processor better every year despite the focus on mobile and server?

Hopefully they find a way. As I mentioned, I think adding multi-way SMT could be one way to give Intel justification for larger cores even with the needed focus on performance per watt. That, and various ways of managing dark silicon within the core during certain operations.