Is mainstream desktop CPU development "completed"?


piasabird

Lifer
Feb 6, 2002
17,168
60
91
The next technology advance often comes from things other than raw speed. Storage has always been a pain point with computers. First came the SSD, then a series of progressions: USB to USB 3.0, improvements in SATA speed, mSATA, Turbo Boost on the i5/i7, Centrino wireless cards, and the new M.2 interface pushing SSD speeds further still. Intel developing on-die HD graphics was nice as well. Then there is faster DDR RAM, and maybe less bulky high-end video cards. One other area is the up-and-coming 4K video interfaces and newer HDTVs and monitors.

There is always room for improvement. I think DVDs and Blu-rays are absolutely archaic at this point. One scratch on the disc and they are toast.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
Here's a little blast from the past.

The original IBM PC was released in August of 1981 and the PC/AT in August of 1984. The 8088 was announced in July of 1979, and the 6 and 8 MHz 80286 parts in February of 1982.

Basically 2-3 years between the two chips.

The 80286 had 2x the IPC, 16x the memory capacity, and, at up to 8 MHz, 70% higher clocks than the 8088. It also had a true 16-bit bus versus the 8088's 8-bit bus, and 'protected mode' operation, which enabled multi-tasking (and multi-user operation if you were running Unix).


http://tech-insider.org/unix/research/1985/1213.html

* IBM PC/XT, 8088 @ 4.77 MHz, PC DOS 2.1, Microsoft C 3.0: 390 / 427 Dhrystones/sec
* IBM PC/AT, 80286 @ 6 MHz, MS-DOS 3.0, Microsoft C 3.0: 1250 / 1388 Dhrystones/sec


Yes, that's more than a 3x performance increase in roughly three years (using the same C compiler, Microsoft C 3.0).

Point being, there really isn't that much movement in this area these days relative to what has happened in the past.

I mean, how far are we from having a consumer i7 with 3x the performance of the i7-2600K at the current rate of performance improvement? 10 more years?
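A rough back-of-the-envelope check of that "10 more years" guess, assuming (purely hypothetically) something like 5-10% single-thread improvement per year:

Code:
import math

# Hypothetical annual single-thread improvement rates (assumptions, not measured data)
annual_rates = [0.05, 0.07, 0.10]
target = 3.0  # 3x the i7-2600K, mirroring the XT -> AT jump above

for r in annual_rates:
    years = math.log(target) / math.log(1.0 + r)
    print(f"At {r:.0%}/year, a {target:.0f}x gain takes ~{years:.0f} years")

# At 5%/year  -> ~23 years
# At 7%/year  -> ~16 years
# At 10%/year -> ~12 years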
 

CakeMonster

Golden Member
Nov 22, 2012
1,630
810
136
Well, more sudden/erratic development in a small market is quite normal, and that's what it was like around 1980. But when computers reached the ubiquity that they did in the early-to-mid 90s, the billions invested in research gave us a steadier rate of improvement. That rate was steady and extremely fast until around 2010/11. Now it seems we are adjusting to another rate, most would say because the market is saturated and because we have hit some physical speed bumps.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
But a 5nm chip with the same die size of current chips? Could easily fit 8+ cores, a strong GPU, the chipset/IO and some HBM on there.

While it is true that xtors double every node, power per xtor only decreases something like 30%* (it would need to decrease 50% to keep power consumption the same for any given die size).

So while this chip you are thinking about may have the xtor budget to dedicate more silicon real estate to CPU and iGPU, power would have to be raised to feed it effectively. If not, various parts would have to be downclocked under simultaneous CPU and iGPU load (and possibly managed as dark silicon).

*Assuming xtor design and materials science are able to keep up. (The transition at TSMC from 28nm planar to 20nm planar is said to reduce power by only 20%.)
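A minimal sketch of that arithmetic, assuming an idealized full-node shrink at a fixed die size (2x the xtors, with the ~30% per-xtor power reduction quoted above):

Code:
density_gain = 2.0        # transistors per mm^2 double (assumed ideal shrink)
power_per_xtor = 0.70     # per-transistor power only drops ~30%

relative_die_power = density_gain * power_per_xtor
print(f"Die power vs. previous node: {relative_die_power:.2f}x")       # ~1.40x

# To hold die power flat, per-transistor power would have to halve instead:
print(f"Required per-xtor power scaling: {1.0 / density_gain:.2f}x")   # 0.50x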
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
There is a big conceptual speed bump too of course.

In a very real sense mainstream CPU development is 'completed' in terms of getting more than enough performance for what a huge swathe of the population uses their computers for.

To get much more computational power into mainstream CPUs, it will take a big new, power-hungry application area to motivate it. I don't think anything obvious is on the horizon right now.

Or some massive materials breakthrough that made it 'trivial' to provide huge improvements, although even then much of it might well go towards power efficiency instead.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
While it is true that xtors double every node, power per xtor only decreases something like 30%* (it would need to decrease 50% to keep power consumption the same for any given die size).

So while this chip you are thinking about may have the xtor budget to dedicate more silicon real estate to CPU and iGPU, power would have to be raised to feed it effectively. If not, various parts would have to be downclocked under simultaneous CPU and iGPU load (and possibly managed as dark silicon).

*Assuming xtor design and materials science are able to keep up. (The transition at TSMC from 28nm planar to 20nm planar is said to reduce power by only 20%.)

Power doesn't go down as quickly as areal shrinking only because there is a market expectation that clockspeeds will increase.

Otherwise it is a slam dunk to scale voltage and clockspeed so that power scales with areal scaling.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Power doesn't go down as quickly as areal shrinking only because there is a market expectation that clockspeeds will increase.

Otherwise it is a slam dunk to scale voltage and clockspeed so that power scales with areal scaling.

Yes, there is a market expectation that clockspeeds will increase at the same core count, so that is what we get with every new generation of quad core.

However, if we apply "scale voltage and clockspeed so that power scales with areal scaling" to that strong GPU (not to mention the 8+ CPU cores) in the SoC Shehriazad wants at 5nm (at the same die size as 22nm), there would have to be very serious downclocking going on in order to keep the same TDP. (A 5nm chip at the same die size as a 22nm chip would be roughly a 16x increase in xtors.)

Therefore I imagine there would need to be some great breakthrough in xtor design and materials science at 5nm, or these chips might end up very different from what people expect. Without that breakthrough, I am thinking mostly low-power blocks will get integrated, and/or the die sizes may be smaller than what we are used to seeing today. Either that or TDPs will swell for SoCs if proportionally more of the real estate is dedicated to higher-power-consuming things like GPU and CPU.
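A quick illustration of how severe that gets if the earlier per-node figures are compounded across the four full nodes from 22nm down to 5nm, again assuming a hypothetical 2x xtors and 30% per-xtor power reduction per node at a fixed die size:

Code:
nodes = 4                       # 22nm -> 14 -> 10 -> 7 -> 5 (assumed full shrinks)
xtor_gain = 2.0 ** nodes        # ~16x transistors at the same die size
power_per_xtor = 0.70 ** nodes  # ~0.24x per-transistor power

die_power = xtor_gain * power_per_xtor
print(f"{xtor_gain:.0f}x transistors, {die_power:.1f}x die power at unchanged clocks")
# ~16x the transistors but ~3.8x the TDP unless clocks/voltages come down substantially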
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
When it comes to CPUs, there is one simple fact that stands above all others: if your computer is running too slowly, it is most likely due to a bottleneck in one single thread rather than the number of threads being insufficient.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Here's a little blast from the past.

The original IBM PC was released in August of 1981 and the PC/AT in August of 1984. The 8088 was announced in July of 1979, and the 6 and 8 MHz 80286 parts in February of 1982.

Basically 2-3 years between the two chips.

The 80286 had 2x the IPC, 16x the memory capacity, and, at up to 8 MHz, 70% higher clocks than the 8088. It also had a true 16-bit bus versus the 8088's 8-bit bus, and 'protected mode' operation, which enabled multi-tasking (and multi-user operation if you were running Unix).


http://tech-insider.org/unix/research/1985/1213.html

* IBM PC/XT, 8088 @ 4.77 MHz, PC DOS 2.1, Microsoft C 3.0: 390 / 427 Dhrystones/sec
* IBM PC/AT, 80286 @ 6 MHz, MS-DOS 3.0, Microsoft C 3.0: 1250 / 1388 Dhrystones/sec


Yes, that's more than a 3x performance increase in roughly three years (using the same C compiler, Microsoft C 3.0).

Point being, there really isn't that much movement in this area these days relative to what has happened in the past.

I mean, how far are we from having a consumer i7 with 3x the performance of the i7-2600K at the current rate of performance improvement? 10 more years?

Kind of pointless to compare 80286 with 8088 instead of 8086. It doesn't have much to do with Intel or their development progress that IBM decided to launch with the cheaper and much more crippled one.

But that effort, going from the 8086 to the 80286, was likely many orders of magnitude below the effort of a comparable jump today, like Sandy Bridge to Haswell. Exponential improvement is not sustainable at the same rate forever. Back then there was much to be exploited in desktop processors just by gaining a larger transistor budget, drawing on research and mainframe/minicomputer technology that had been developed decades before. Since then many limits have been hit (or, if not hit hard, are being approached asymptotically with diminishing returns).
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
Is mainstream desktop CPU development "completed"?

When I was younger (or not yet born, depending on my age), calculators were HUGE, expensive things.

For example, I could have walked into a room (if I had been born then) and seen these massive things:

[Image: AnitaMk8_1.jpg - ANITA Mk 8 desktop calculator]


Introduced around 1961, costing many thousands of dollars.

Over the years, they got smaller and smaller, and cheaper.

At some point around 1980 (maybe a bit later; I'm not sure of the exact date), the calculator became almost exactly what it is today.

I.e. tiny, handheld, very low cost, readily available, and LCD-screened.

tl;dr
Basic 4-function, 8-digit calculators were more or less finished by about 1980, and have remained pretty much the same (ignoring higher-end ones, such as scientific models, which did progress for longer) some 30 or 35 years later.

But in the case of the calculator, it was "good enough" and "cheap enough" in the 1980s to continue like that to this day.

Yes, much more powerful scientific/programmable/specialty calculators can be created. But there is a much smaller market for them, as most people are happy with a cheap 4-function, 8-digit calculator.

So, sadly, desktop computers, especially the ones used for web browsing, email, light gaming, gentle office application use, etc., are heading towards "good enough", and getting ever closer to "cheap enough", to be commodity items, much like calculators ended up being.

We are ending up like people who opened a video tape recorder sales/repair/tape-rental shop in 1980, and are wondering why, in 2015, no one is renting our VHS tapes, buying new VCRs, or bringing in their faulty VCRs anymore. Anyway, I've got to go now; someone has entered my vinyl record shop. (Joke, maybe.)
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
When I was younger (or not yet born, depending on my age), calculators were HUGE, expensive things.

For example, I could have walked into a room (if I had been born then) and seen these massive things:

[Image: AnitaMk8_1.jpg - ANITA Mk 8 desktop calculator]


Introduced around 1961, costing many thousands of dollars.

Over the years, they got smaller and smaller, and cheaper.

At some point around 1980 (maybe a bit later; I'm not sure of the exact date), the calculator became almost exactly what it is today.

I.e. tiny, handheld, very low cost, readily available, and LCD-screened.

tl;dr
Basic 4-function, 8-digit calculators were more or less finished by about 1980, and have remained pretty much the same (ignoring higher-end ones, such as scientific models, which did progress for longer) some 30 or 35 years later.

But in the case of the calculator, it was "good enough" and "cheap enough" in the 1980s to continue like that to this day.

Yes, much more powerful scientific/programmable/specialty calculators can be created. But there is a much smaller market for them, as most people are happy with a cheap 4-function, 8-digit calculator.

So, sadly, desktop computers, especially the ones used for web browsing, email, light gaming, gentle office application use, etc., are heading towards "good enough", and getting ever closer to "cheap enough", to be commodity items, much like calculators ended up being.

We are ending up like people who opened a video tape recorder sales/repair/tape-rental shop in 1980, and are wondering why, in 2015, no one is renting our VHS tapes, buying new VCRs, or bringing in their faulty VCRs anymore. Anyway, I've got to go now; someone has entered my vinyl record shop. (Joke, maybe.)

The difference is that the companies *could* make much more powerful calculators (at reasonable cost) if there were a market for it. But can they really make much more powerful desktop CPUs (at reasonable cost) if there were a market for it? I doubt it...
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
Kind of pointless to compare 80286 with 8088 instead of 8086. It doesn't have much to do with Intel or their development progress that IBM decided to launch with the cheaper and much more crippled one.

But that effort, going from the 8086 to the 80286, was likely many orders of magnitude below the effort of a comparable jump today, like Sandy Bridge to Haswell. Exponential improvement is not sustainable at the same rate forever. Back then there was much to be exploited in desktop processors just by gaining a larger transistor budget, drawing on research and mainframe/minicomputer technology that had been developed decades before. Since then many limits have been hit (or, if not hit hard, are being approached asymptotically with diminishing returns).

In short, mainstream desktop CPU development is "completed"... ? :(
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
The difference is that the companies *could* make much more powerful calculators (at reasonable cost) if there were a market for it. But can they really make much more powerful desktop CPUs (at reasonable cost) if there were a market for it? I doubt it...

They can make them, even now.

But not for single-threaded performance.

Give me a dual-processor (2 x 18-core = 36C/72T) desktop computer, and I will be really happy. Until they bring out the 48C/96T one six months later.

You may say: but what about the $15,000 such a computer would cost?

My answer is: graphics cards can be very, very powerful and yet still be reasonably priced. The reason (I think) is that there is still a big market for powerful graphics cards at reasonable prices.

But server CPUs (the very powerful ones) are massively expensive, because the market is relatively small compared to many mass-market items, such as TVs.
Also, the server market can cope with the very high prices because it typically sells to businesses, who can afford it and have a real business need for the potentially huge performance.

If graphics cards were $15,000 (for a complete gaming system), most gamers could not afford them and/or would buy something else, like a car, if they had that sort of money.

tl;dr
Yes, they can do it (e.g. use server CPUs as desktop CPUs).
But not at a mass-market price.

EDIT:
Or to put it another way:

But can they really make much more powerful desktop CPUs (at reasonable cost) if there were a market for it? I doubt it...

If the market for very high-end CPUs (e.g. 18C/36T) became 100 times bigger, Intel could probably reduce the price dramatically, to say $350, in much the same way the i7 can be priced where it is because it sells in such massive quantities.

If Intel could only sell 100 i7s per year, the price would be massively more than $350.
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
I think it's more apparent for those of us who have been into PCs for a long time and remember when large gains were the order of the day. And while I wish it weren't so as much as the next guy, I don't have much need for anything faster, even if there were something at a reasonable cost. The proverbial ball is in the software guys' court, I think.
And I really don't see much changing until we have something fundamentally different going on in how we interact with and use computers and/or the internet. I'm not sure what those things should or could be, but I'm pretty sure a keyboard and mouse and a screen, any screen, aren't it.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Kind of pointless to compare 80286 with 8088 instead of 8086. It doesn't have much to do with Intel or their development progress that IBM decided to launch with the cheaper and much more crippled one.

But that effort, going from the 8086 to the 80286, was likely many orders of magnitude below the effort of a comparable jump today, like Sandy Bridge to Haswell. Exponential improvement is not sustainable at the same rate forever. Back then there was much to be exploited in desktop processors just by gaining a larger transistor budget, drawing on research and mainframe/minicomputer technology that had been developed decades before. Since then many limits have been hit (or, if not hit hard, are being approached asymptotically with diminishing returns).

Also:

8088: $125
80286: $350

2500K: $200
4570K: $200
 

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
Also:

8088: $125
80286: $350

2500K: $200
4570K: $200

Also,
8088 - 29k transistors on 3.0um, 33mm^2
80286 - 134k transistors on 1.5um, 47mm^2

(Socket 2011 parts used here to take the processor graphics out of the comparison)
SB-E 6C i7-3970X - 2.27B @ 32nm, 435mm^2
H-E 8C i7-5960X - 2.6B @ 22nm, 356mm^2

Even outside the architectural changes between the 8088 and the 80286, that chip also got an almost 50% larger die along with a process jump that gave 4x smaller transistors, resulting in 4.6x more transistors being used in the design. In the modern case, the number of transistors only jumps 15%, and the per-core transistor count actually drops.
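For reference, here are the ratios behind those statements, computed directly from the figures listed above:

Code:
# Transistor-budget ratios from the numbers quoted in this post
xt_286   = 134_000 / 29_000            # 80286 vs 8088 transistors  -> ~4.6x
area_286 = 47 / 33                     # 80286 vs 8088 die area     -> ~1.4x

xt_hsw   = 2.6e9 / 2.27e9              # 5960X vs 3970X transistors -> ~1.15x
per_core = (2.6e9 / 8) / (2.27e9 / 6)  # per-core transistors       -> ~0.86x (drops)

print(f"80286/8088: {xt_286:.1f}x the transistors on a {area_286:.2f}x die")
print(f"5960X/3970X: {xt_hsw:.2f}x the transistors, {per_core:.2f}x per core")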

It's kind of like what GPUs went through, when the Ti 4800 could pull all its power through the AGP bus and, five years later, the GTX 280 was almost 6x the die size and pulling a couple hundred watts through 6-pin plus 8-pin PCIe connectors. With the early CPUs, Intel got to throw die size and transistors at the problem, again doubling transistors and die size going to the 386.

As much as it pains me, I don't think we'll be seeing any 650mm^2 14nm 8C desktop processors any time soon where Intel throws die size and transistors at pushing single-threaded performance as far as possible.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Also,
8088 - 29k transistors on 3.0um, 33mm^2
80286 - 134k transistors on 1.5um, 47mm^2

8088: 2.5 W power dissipation

80286: 3.3 W power dissipation

3.3 / 2.5 = 1.32x

We (as consumers) no longer allow CPU manufacturers to increase power consumption 32% every 2-3 years.

This is a restriction placed on the desktop market now that was not placed on it "back in the halcyon days", when desktop performance was growing by leaps and bounds thanks to a power ceiling that was also allowed to grow by leaps and bounds.

If Intel had to approach the desktop market in 1982 the same way it is required to approach it in 2015, where power consumption has to stay flat (or be reduced), then those super-duper desktop performance gains back in 1982 would never have happened either.

The market changed the goalposts on what success means in the desktop space; it isn't a valid comparison to come along in 2015 and expect stellar performance gains in a power-constrained environment that didn't exist back in the days of yore.
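To put a hypothetical number on that: compounding the 1.32x-per-generation power growth seen above from 1982 to 2015, at one bump every ~2.5 years, would look like this (purely illustrative assumptions):

Code:
start_watts = 3.3            # 80286-class power dissipation (from above)
growth = 1.32                # the per-generation increase consumers once tolerated
gens = (2015 - 1982) / 2.5   # assume one bump every ~2.5 years

print(f"{gens:.0f} generations -> ~{start_watts * growth ** gens:.0f} W")   # ~130 W

That trend line lands at roughly the 100-150 W range where mainstream desktop TDPs actually stalled (and in reality they got there well before 2015), which is the flat ceiling being described.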
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
S-curve. Slow ramp, followed by a steep slope, followed by a shallow one. It seems to be a universal phenomenon. Consider jetliners: an explosion of them in the 50s, followed by steady size increases. For us consumers ("the flying public"), little has changed inside them since the 80s, except that they're more crowded. Yes, innovation goes on. Just as CPUs draw less power, jetliners use less fuel. The innovation continues, but with the low-hanging fruit gone, each improvement costs more to make. So we're at the top of the S-curve in both technologies, and nobody likes it. To start a new curve we need a paradigm shift.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Here's a little blast from the past.

The original IBM PC was released in August of 1981 and the PC/AT in August of 1984. The 8088 was announced in July of 1979, and the 6 and 8 MHz 80286 parts in February of 1982.

Basically 2-3 years between the two chips.

The 80286 had 2x the IPC, 16x the memory capacity, and, at up to 8 MHz, 70% higher clocks than the 8088. It also had a true 16-bit bus versus the 8088's 8-bit bus, and 'protected mode' operation, which enabled multi-tasking (and multi-user operation if you were running Unix).


http://tech-insider.org/unix/research/1985/1213.html

* IBM PC/XT, 8088 @ 4.77 MHz, PC DOS 2.1, Microsoft C 3.0: 390 / 427 Dhrystones/sec
* IBM PC/AT, 80286 @ 6 MHz, MS-DOS 3.0, Microsoft C 3.0: 1250 / 1388 Dhrystones/sec


Yes, that's more than a 3x performance increase in roughly three years (using the same C compiler, Microsoft C 3.0).

Point being, there really isn't that much movement in this area these days relative to what has happened in the past.

I mean, how far are we from having a consumer i7 with 3x the performance of the i7-2600K at the current rate of performance improvement? 10 more years?

This would be an appropriate response:

[Image: extrapolating.png]
 

mohit9206

Golden Member
Jul 2, 2013
1,381
511
136
I feel mainstream CPU development, at least for Intel, peaked in 2011 with Sandy Bridge. For AMD it peaked in 2009 with the Phenom II X4. So after that, AMD and Intel should have just stopped any further R&D on CPU technology and from then on focused only on increasing iGPU power.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
2,417
75
91
I feel mainstream CPU development, at least for Intel, peaked in 2011 with Sandy Bridge. For AMD it peaked in 2009 with the Phenom II X4. So after that, AMD and Intel should have just stopped any further R&D on CPU technology and from then on focused only on increasing iGPU power.

Significant parts of the market have been moving steadily towards wanting/needing/liking very low power consumption, while retaining as much performance as possible at the same time.

Especially mobile/small devices running on batteries, which want to be fan-less and to last a very long time on a single charge.

If AMD had stuck with the FX-8350 (hypothetically), they would be the laughing stock (sorry, AMD fans), with its 125 W TDP and relative lack of single-thread performance.

There may also be significant cost savings (especially in the future, once yields and other things have settled down) in manufacturing at smaller and smaller geometries, such as 14nm and 10nm, giving the manufacturers the option to reduce the selling price of some or all of their CPU offerings.

Also, server chips love to be VERY power efficient, because it minimizes running costs (they often run 24/7 for years on end) and at the same time dramatically increases the maximum number of on-die cores, assuming the main constraint is a desired maximum practical TDP.

E.g. if 145 W is the highest TDP you want, then as power efficiency improves, more and more cores can fit within it. I am very impressed that Intel can fit an amazing 18 cores into only a 145 W TDP.

If AMD and Intel froze their development programs, then sooner or later ARM (and maybe other competitors) could meet, or even beat, their CPU range. If that were to happen, it would be far too late for Intel or AMD to react.

tl;dr
It could end the company, as the competition could potentially overtake them.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
8088: 2.5 W power dissipation

80286: 3.3 W power dissipation

3.3 / 2.5 = 1.32x

We (as consumers) no longer allow CPU manufacturers to increase power consumption 32% every 2-3 years.

The problem is that even if the CPU manufacturers did add 32% TDP in a 2-3 year timeframe, they would nowadays not come anywhere near the level of CPU performance increase we saw going from the 8088 to the 80286.

Nor near the performance increase we saw in any 2-3 year period during the ~1970-2005 era. :( And it's no small difference either: the yearly performance increase rate is probably 5-10 times lower today (depending on which 2-3 year period of that era you compare against).
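As a rough sanity check of that "5-10 times lower" figure, compare the annualized growth implied by the XT-to-AT Dhrystone numbers earlier in the thread against a hypothetical modern 5-10% per year:

Code:
# XT -> AT: ~3.2x in ~3 years (Dhrystone figures quoted earlier in the thread)
old_cagr = (1250 / 390) ** (1 / 3) - 1            # ~47% per year

# Hypothetical modern single-thread improvement rates (assumptions)
for new_cagr in (0.05, 0.10):
    print(f"old ~{old_cagr:.0%}/yr vs new {new_cagr:.0%}/yr -> ~{old_cagr / new_cagr:.0f}x slower today")

# Comes out to roughly 5-9x, in line with the estimate above.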
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
The problem is that even if the CPU manufacturers did add 32% TDP in a 2-3 year timeframe, they would nowadays not come anywhere near the level of CPU performance increase we saw going from the 8088 to the 80286.
Well, don't say that. They could push an i7, for example, from 658
http://hwbot.org/submission/2516255_a7154ka_cinebench___r15_core_i7_3770_658_cb
all the way up to 1329, or any number in between
http://hwbot.org/submission/2784147_dancop_cinebench___r15_core_i7_4770k_1329_cb
"just by" increasing the TDP (FX-9590 style). The thing is whether that is viable as a business model, since people would probably avoid such chips like the plague.
 
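For a sense of why that "FX-9590 style" approach burns power so fast, here is a minimal sketch using the classic dynamic-power relation P ~ C * V^2 * f (the voltage figure is a hypothetical illustration, not taken from the linked runs):

Code:
# Dynamic power scales roughly with voltage squared times frequency
base_v, base_f = 1.0, 1.0    # normalized stock voltage and clock
oc_v,   oc_f   = 1.25, 2.0   # assumed ~25% more voltage to hold ~2x the clock

power_ratio = (oc_v / base_v) ** 2 * (oc_f / base_f)
print(f"~{power_ratio:.1f}x the power for ~2x the Cinebench score")   # ~3.1x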

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
The problem is that even if the CPU manufacturers did add 32% TDP in a 2-3 year timeframe, they would nowadays not come anywhere near the level of CPU performance increase we saw going from the 8088 to the 80286.

You know this how?