Is mainstream desktop CPU development "completed"?


jpiniero

Lifer
Oct 1, 2010
16,830
7,279
136
Sure, all of those off-topic aspects are valid. But as far as desktop CPU performance goes, would you consider that development to be more or less "completed"?

Well, to answer your question... If you are talking about Core's high-end desktops... yes, for the time being. If anything, performance should regress as heat density eventually kills clock speed as the nodes shrink. I'm sure Intel will do something about it eventually, but I imagine it's not a high priority. What's high priority now is taming power consumption.
 

Revolution 11

Senior member
Jun 2, 2011
952
79
91
Sure, all of those off-topic aspects are valid. But as far as desktop CPU performance goes, would you consider that development to be more or less "completed"?
Unfortunately, yes. As far as my gaming is concerned, I am not even close to being CPU-limited, but in emulation, more IPC and clocks are always welcome.

But this future of limited CPU progress was predicted as early as 2003-2005, when Dennard scaling stopped and clocks hit a power/heat wall. And Amdahl's Law means that more than a single core gives diminishing returns.
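That diminishing-returns point can be made concrete with Amdahl's Law; a minimal sketch (the function name is mine):

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallel fraction of the workload and n is the number of cores.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# With 90% of the work parallelizable, extra cores quickly stop helping:
for cores in (1, 2, 4, 8, 16, 10**6):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

Even with a million cores, a 90%-parallel workload tops out below a 10x speedup, which is why per-core performance still matters on the desktop.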
 

Hulk

Diamond Member
Oct 9, 1999
5,146
3,746
136
Idon'tcare has a great post on the previous page. Consumer demand is what drives the CPUs we get, as well as every other product.

Whenever you have literally billions of people buying a product, the money pouring into that product is an irresistible force. If consumer demand for higher desktop performance was there, we'd get it. Intel is in business to make money. Period. Pure and simple. And they do that by making a product that people want.

Imagine, if you will, that the CPU was a niche product with only, say, 100,000 units needing to be manufactured every year. They'd probably be 286s. The complexity of design, engineering, and manufacturing that goes into modern processors is astounding. It requires thousands of talented people and literally billions of dollars. And it is totally driven by consumer demand. We get what we want. And by "we" I mean the majority.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Idon'tcare has a great post on the previous page. Consumer demand is what drives the CPUs we get, as well as every other product.

Whenever you have literally billions of people buying a product, the money pouring into that product is an irresistible force. If consumer demand for higher desktop performance was there, we'd get it. Intel is in business to make money. Period. Pure and simple. And they do that by making a product that people want.

Imagine, if you will, that the CPU was a niche product with only, say, 100,000 units needing to be manufactured every year. They'd probably be 286s. The complexity of design, engineering, and manufacturing that goes into modern processors is astounding. It requires thousands of talented people and literally billions of dollars. And it is totally driven by consumer demand. We get what we want. And by "we" I mean the majority.

Well, the "majority" is pretty stupid to accept those new low-power "tablet-style" CPUs/APUs in a desktop.

But maybe the (consumer) desktop is dead, after all, and it's just lingering on somehow, with these cheaper-to-produce tablet CPUs.

Or people have been brought up with "expectations", that computers "always" got faster, cheaper, smaller, etc., and then found out the hard way that newer budget PCs were markedly slower than older ones.

I would rather have an AM2 X2 6000+, even with all of its 125 W power consumption, over a Brazos CPU in a desktop, for example. (At least, I think so.) Of course, it would help to have a decent chipset in that machine too.
 

tenks

Senior member
Apr 26, 2007
287
0
0
Fjodor2001, I think you were too quick to jump to a defensive position by dismissing kimmel's point.

Why did desktop CPUs experience the annualized compound rate of performance improvements in the past that you now note is lacking?

Intel, AMD, Texas Instruments, VIA, NexGen, IDT (WinChip), etc. all built desktop CPUs and rushed about as frantically as possible for more than a decade trying to get faster and faster models out in front of the desktop consumer.

Consumers bought the chips, justifying the development expenses and business risks that preceded their creation.

And then guess what happened to those desktop consumers? They started to care less and less about desktop performance. More and more of them started liking the idea of mobile computing; having a lighter and longer-lasting laptop was worth their consumer dollars.

So when you ask where did the development momentum go in the desktop market, momentum that was dependent on desktop consumers wanting to buy ever-higher performing desktop CPUs, you have to ask yourself where did the consumer's dollars themselves go?

And to kimmel's credit and astute synopsis, those revenue dollars went to Apple, to Samsung, and to Intel's own mobile product offerings.

So why would Intel, or AMD for that matter, justify sinking ever-higher R&D expenses into developing ever-faster desktop processors when the markets have spoken, voted with their wallets, and are buying up smartphones, tablets, and silly thin netbooks/laptops instead of desktops?

The premise of your argument in the OP appears to be that you believe the pace of advancement stagnated, and thus the consumer had no choice but to migrate to other compute platforms and spend their money on mobile products. I don't buy that; the development money follows where decision makers think the markets are headed.

Circa 1880, there were a number of companies developing the next generation of leading-edge horse-drawn carriages. And then the day came when they stopped investing in developing the next best horse carriage.

Did the end of the era of horse-drawn carriages come about because the pace of development stagnated? Or did it come about because consumers abandoned that market and pursued a more feature-compelling product called the automobile, forcing horse-carriage companies to allocate R&D appropriately in order to best survive the transition?

I think kimmel is right on the money. The focus is not on desktop performance improvements because the consumer markets are no longer focused on it either. Consumer markets shifted, they want mobility and other features (wireless charging, faster network speeds, etc), and these companies have shifted their R&D priorities accordingly.

Enjoyed reading this response. So well put and spot on, good post man.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
In short: progress can be stopped by market economics. I.e., R&D money goes into areas where there is consumer demand and thus profit to be made. And if there is no consumer demand in a certain area, progress will slow down or stop.

That is correct. But it does not paint a complete picture. There are other reasons progress can be stopped too, in this case most importantly the laws of physics. I.e., some progress can be stopped simply because physics makes further progress impossible (at least at the same rate as before). No matter how much consumer demand there is for it, and how much R&D money is put into it, it cannot be done.

So the question is: let's assume there was higher demand for faster desktop CPUs. Do you really think we could nowadays see desktop CPU performance progress like we used to in the 1970--2005 era? Remember, back then we had a yearly performance increase of 70% or so; nowadays it's 7% or less.

From what I've gathered, physics has also put an end to high performance increases on desktop CPUs, not only market economics.

In other words: I do not think Intel could produce a desktop CPU that is 1.7^5 = 14.2 times faster than a 4790K five years from now. Not even close. That is regardless of whether there was market demand for such a chip and increasing desktop CPU performance was the main focus for Intel. Or do you disagree with that?
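The compound arithmetic here is easy to check; a quick sketch contrasting the two growth rates:

```python
# Compound performance growth: rate r per year over n years gives (1+r)**n.
def compound(rate: float, years: int) -> float:
    return (1.0 + rate) ** years

# The ~70%/year pace of the old era vs ~7%/year today, over 5 years:
print(round(compound(0.70, 5), 1))  # 14.2x, the figure cited above
print(round(compound(0.07, 5), 1))  # only ~1.4x at today's pace
```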
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Let's look at IPC increases using CPUMark99 (lower score = faster per clock; the percentage is the IPC gain over the previous entry):

1989 - 486 25.00
1995 - Pentium P54C 16.40 52.4%
1997 - Pentium P55C (MMX) 14.90 10.1%
1995 - Pentium Pro 10.50 41.9%
1998 - Celeron Mendocino 12.50 -16.0%
1998 - Pentium II (Deschutes) 13.20 -5.3%
1999 - Pentium III (Katmai) 13.00 1.5%
1999 - Pentium III (Coppermine) 11.20 16.1%
2000 - Pentium 4 Willamette 17.00 -34.1%
2001 - Pentium III (Tualatin) 11.00 54.5%
2002 - Pentium 4 Northwood 15.80 -30.4%
2004 - Pentium 4 Prescott 20.70 -23.7%
2006 - Conroe 7.10 191.5%
2007 - Penryn 6.90 2.9%
2008 - Nehalem 6.50 6.2%
2010 - Westmere 6.60 -1.5%
2011 - Sandy Bridge 6.40 3.1%
2012 - Ivy Bridge 6.30 1.6%
2013 - Haswell 5.80 8.6%
2014 - Broadwell 5.35 8.4%

(A 2004 Pentium M scores 8.2)
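The percentage column can be reproduced from the scores themselves; a quick sketch (assuming, per how CPUMark99 works, that a lower score means fewer clocks per unit of work):

```python
# CPUMark99 scores are time-like: lower = fewer clocks per unit of work,
# so the per-generation IPC gain is prev_score / new_score - 1.
def ipc_gain(prev_score: float, new_score: float) -> float:
    return prev_score / new_score - 1.0

# Spot-check a few adjacent pairs from the list above:
print(f"{ipc_gain(25.00, 16.40):+.1%}")  # 486 -> Pentium P54C: +52.4%
print(f"{ipc_gain(20.70, 7.10):+.1%}")   # Prescott -> Conroe: +191.5%
print(f"{ipc_gain(5.80, 5.35):+.1%}")    # Haswell -> Broadwell: +8.4%
```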

So legacy IPC increases have been pretty good and stable throughout the years if you exclude the P4 line. The performance since has come from new instructions, more cores, and higher frequency. In the same vein, using Linpack or similar, my 4670 is ~80% faster than my 3570K.

Increasing CPU performance isn't the issue. Using the CPU performance delivered outside the legacy segment is. And that's where the desktop segment tanks massively, because 99%+ of desktop software just isn't fit for the task. Servers, on the other hand, had massive performance increases in all metrics. Mobile had massive performance/watt increases.

Nobody besides a dinosaur niche is interested in (radically) higher desktop performance either, given the tradeoffs. Consumers today want smaller, more integrated, lower-power, and quieter computing. And that includes on the desktop.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
IPC yes, but you're missing that we also used to see massive frequency increases, going from 33 -> 66 -> 200 -> 800 -> [...] MHz at quite a rapid pace. That was what used to account for most of the performance increase. Now frequency growth has more or less ground to a halt over the last 5 years, and there is no improvement in sight.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
We also had massive power consumption increases. If we had to follow the trend, we would be sitting with 500 W CPUs today. And people don't want that.

Frequency has been stuck for more than 5 years. It's over 10 years now.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
We also had massive power consumption increases. If we had to follow the trend, we would be sitting with 500 W CPUs today. And people don't want that.

Frequency has been stuck for more than 5 years. It's over 10 years now.

The thing is that previously you paid a much smaller TDP penalty per frequency increase than today. E.g., in ~2 years you could often double the frequency at only 30% or so higher TDP (given the benefit of a newer node too).

Do you think Intel could produce a 4790K replacement ~2 years after it was introduced running at 8/8.8 GHz instead of 4/4.4 GHz? At only 30% higher TDP, so 88 * 1.3 = 114 W? I don't think so. No matter if there was much higher consumer demand for faster desktop CPUs and it was the primary focus for Intel. Not even close.

Again, physics is also putting an end to the fun these days. Not only market economics and consumer demand.
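For a rough sense of why an 8 GHz part at 114 W is implausible: dynamic CMOS power scales roughly as C * V^2 * f, and post-Dennard, higher frequency also demands higher voltage. A sketch with illustrative numbers (the 25% voltage bump is an assumption, not a measured figure):

```python
# Dynamic CMOS power scales roughly as P ~ C * V^2 * f. Once Dennard
# scaling ended, raising f also meant raising V, so power grows much
# faster than linearly in frequency. Illustrative numbers only:
def dynamic_power(base_power: float, freq_ratio: float, volt_ratio: float) -> float:
    return base_power * freq_ratio * volt_ratio ** 2

# Doubling an 88 W part's frequency, assuming ~25% more voltage is needed:
print(round(dynamic_power(88.0, 2.0, 1.25)))  # ~275 W, nowhere near 114 W
```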
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
You would increase performance beyond just frequency. The current design rule is 1% performance for 0.5% power increase.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
It's quite simple. The design rule since Core 2 has been that a 1% performance increase may only increase power usage by 0.5%. So anything that increases performance by 1% or more while increasing power consumption by 0.5% or less gets implemented. Otherwise it's discarded.

So if someone came and said, "Look, I can increase performance by 100%, it just increases power consumption by 60%," then it's discarded.

In your case, a 30% increase in TDP + a node shrink would end up at around a 160% power increase. So you simply traded more performance for even more power consumption.
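The rule as stated can be written as a simple acceptance predicate; this is my formalization of the description above, not Intel's actual process:

```python
# The "2:1" rule as described: a feature is accepted only if it buys at
# least 1% performance per 0.5% power, i.e. perf gain >= 2x power gain.
def passes_design_rule(perf_gain: float, power_gain: float) -> bool:
    return perf_gain >= 2.0 * power_gain

print(passes_design_rule(0.01, 0.005))  # True: exactly on the limit
print(passes_design_rule(1.00, 0.60))   # False: the 100%/60% example above
```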
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
It's quite simple. The design rule since Core 2 has been that a 1% performance increase may only increase power usage by 0.5%. So anything that increases performance by 1% or more while increasing power consumption by 0.5% or less gets implemented. Otherwise it's discarded.

So if someone came and said, "Look, I can increase performance by 100%, it just increases power consumption by 60%," then it's discarded.

In your case, a 30% increase in TDP + a node shrink would end up at around a 160% power increase. So you simply traded more performance for even more power consumption.

So you're saying that if Intel didn't have this design rule, they could in two years produce a 4790K replacement that was twice as fast at only 114 W TDP?

Really...? By doubling the IPC, or the frequency? Or perhaps a combo: 50% higher IPC and 50% higher frequency, all in two years? I'm not sure if you're being serious.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Who knows what they could do. Hypothetical products are useless. What if it didn't have an IGP, what if it didn't have this and that.

Take a look at the 5960X and how much difference the cache, for example, makes. A 3/3.5 GHz CPU running circles around a 4/4.4 GHz one. And it's not due to more cores.

But again, with the right benchmark, Haswell is already 80% faster than IB core for core. And I am sure a Skylake with AVX-512 will be as well contra Haswell.
 

Hulk

Diamond Member
Oct 9, 1999
5,146
3,746
136
Well, the "majority" is pretty stupid to accept those new low-power "tablet-style" CPUs/APUs in a desktop.



It's frustrating, I know. But we can't fault the majority when for the most part all they're doing is browsing the web, writing/reading e-mails, and maybe writing a letter now and then. For what they're doing, they have all the power they need. Even people who are not computer literate realize that they aren't waiting on their computers like they were 15 years ago.

Gaming is probably our last best hope for faster desktop processors and GPUs.

As it is, we're kind of getting the "scraps" from mobile development. Ivy, Haswell, and Broadwell were absolutely focused on low power first and performance second. For the desktop we got a higher-power derivative of those parts. In the past, the mobile part was a derivative of a part designed specifically for desktop performance, either scaled back power-wise or a completely different design, as with Banias and Dothan.

And of course the situation is exacerbated by the fact that Intel simply has no competition in the desktop space. One would hope that with Intel focused so heavily on mobile, there would be an opening for AMD to create a desktop performance chip.
 

mikk

Diamond Member
May 15, 2012
4,307
2,395
136
So here are the facts:

* 14 nm Broadwell brought no major IPC or frequency increase.
* 14 nm Skylake is not expected to do that either.


Based on what expectation? And what is a major IPC increase for you?
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
It's quite simple. The design rule since Core 2 has been that a 1% performance increase may only increase power usage by 0.5%. So anything that increases performance by 1% or more while increasing power consumption by 0.5% or less gets implemented. Otherwise it's discarded.

Who knows what they could do. Hypothetical products are useless. What if it didn't have an IGP, what if it didn't have this and that.

Take a look at the 5960X and how much difference the cache, for example, makes. A 3/3.5 GHz CPU running circles around a 4/4.4 GHz one. And it's not due to more cores.

But again, with the right benchmark, Haswell is already 80% faster than IB core for core. And I am sure a Skylake with AVX-512 will be as well contra Haswell.

Intel's man-made design rule says nothing about what the laws of physics permit them to do in reality. That is regardless of Intel's priorities and market demand.

Intel could just as well have chosen any other arbitrary design rule, saying e.g.: "a 1% performance increase may only increase power usage by 0.1%". That does not mean Intel could design a CPU that was 100% faster at only 10% higher TDP simply because they wanted to.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
with the right benchmark, Haswell is already 80% faster than IB core for core. And I am sure a Skylake with AVX-512 will be as well contra Haswell.

Cherry-picked specialized metrics are not relevant for determining general CPU performance increases. So you were not being serious after all...
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
Based on what expectation? And what is a major IPC increase for you?

I'm talking about an IPC increase that would result in ~70% yearly CPU performance increases. Since frequency is expected to stand still, that is what it would take to reach 1970--2005-era levels of yearly performance increase.

Having said that, even a 20-25% IPC increase would be considered major these days. But so far I've not heard of any Skylake uArch changes that are likely to result in that. And even if it did, it would not suffice if it was only a one-time leap. They would need to sustain that level of yearly CPU performance increase for future generations too for performance to be considered steadily increasing at past rates. Do you see any signs of that happening going forward?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I'm talking about an IPC increase that would result in ~70% yearly CPU performance increases.

Do I need to remind you of the IPC progression over the years?

But again, what you ask for is huge 500 W CPUs. Something there aren't buyers for.

Do you also remember CPU prices back then? $1000 in today's money didn't get you much. A P3 1 GHz would cost you $1400 today, a P2 400 about $1150, and so on.

Would you also be willing to pay that price? Or is it more like $200?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Cherry-picked specialized metrics are not relevant for determining general CPU performance increases. So you were not being serious after all...

One could also say you use old obsolete code.

There are quite a few factors to consider:
Newer instructions.
Overall IPC increase.
Power consumption.
Die size.
Price.
Frequency.
 

Hulk

Diamond Member
Oct 9, 1999
5,146
3,746
136
Welcome to reality.


When any new technology is introduced to the marketplace, the rate of advancement always decreases with time. All of the relatively easy, inexpensive changes that produce the greatest improvements happen quickly (the low-hanging fruit). When the low-hanging fruit is gone, it's time to get out the ladders and work hard for just a little fruit.

We are over 35 years into x86 development. Looking at the rate of IPC increases over the past 10 years, I'd expect there to be 20% TOTAL IPC improvement left in x86. I'm talking about legacy code. New instructions can greatly increase IPC, but as we all know, they can put them in there and software developers never seem to really exploit them in applications.

I'm expecting a 5% IPC gain from Broadwell to Skylake. I'd be surprised if it's 10% and shocked if it's greater than that.

We haven't seen more than 10% since Prescott to Conroe, and that was 10 years ago.