Why are desktop CPUs so slow at improving?


ehume

Golden Member
Nov 6, 2009
1,511
73
91
Maybe you should install ad blocking software? Anyway, I use Privacy Badger and it works fairly well, and it made my web browser much faster.
Thanks for that. I just looked them up -- not all extensions made the transition. I installed the extension. We'll see how it goes.
 

whm1974

Diamond Member
Jul 24, 2016
9,436
1,571
126
Thanks for that. I just looked them up -- not all extensions made the transition. I installed the extension. We'll see how it goes.
What is nice about Privacy Badger is that you can turn it off per website on the fly, and it will still be on for others.
 

Dave3000

Golden Member
Jan 10, 2011
1,520
114
106
I don't think that CPUs have been slow at improving from an overall performance standpoint. If games were programmed to take full advantage of as many cores as you have available in your system, games would perform much better on 6-core systems than on 4-core systems, even when the 4-core systems have slightly higher clock speeds and IPC. An i7-2600K is much slower than an i7-8700K, assuming a program or game takes full advantage of 6 cores with hyperthreading. However, from an IPC standpoint, I think progress has been slow in the CPU department since the Sandy Bridge generation.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
That's just a blatant lie, or misinformation at best.

https://www.anandtech.com/bench/product/287?vs=1826

The 7700K blows the doors off of the 2600K in anything non-gaming (i.e., not GPU bound). And how efficient do you think that 2600K is at 4.5GHz? Even without adding voltage, you are increasing the power usage a good bit (over a 1GHz overclock).

This. I have a 2600K @ 4.4GHz for daily use. I can bench it at 4.5GHz or even 4.6GHz, but efficiency goes right out the window; it's not worth the extra volts and high temps to gain a negligible amount of performance.

Even at 4.4GHz it probably draws >130W, while a 7700K can probably do 4.5GHz at stock volts and pull approx 65W, so that's approx a 150% increase in efficiency, considering a 7700K is also significantly faster clock for clock vs a 2600K.
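To sanity-check that efficiency figure, here's a rough back-of-the-envelope sketch (the ~20% clock-for-clock advantage and the wattages are assumed figures from this post, not measurements):

```python
# Rough perf/W comparison; all inputs are assumptions quoted above, not measurements.
sandy_power_w = 130.0   # 2600K @ 4.4GHz, estimated draw
kaby_power_w = 65.0     # 7700K @ 4.5GHz, estimated draw
ipc_advantage = 1.20    # assumed Kaby Lake clock-for-clock advantage over Sandy Bridge

perf_per_watt_ratio = (ipc_advantage / kaby_power_w) / (1.0 / sandy_power_w)
print(f"perf/W ratio: {perf_per_watt_ratio:.2f}x "
      f"(~{(perf_per_watt_ratio - 1) * 100:.0f}% more efficient)")
# -> perf/W ratio: 2.40x (~140% more efficient), roughly in line with the ~150% estimate
```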
 

mikeymikec

Lifer
May 19, 2011
20,997
16,243
136
QFT.

My wife does her research on the net, sends email, etc. She uses a computer from 2008. The only thing I've done with it was to swap her 1/2 TB HD for a 1/2 TB SSD. Now she flies as fast as she wants. It's the Internet that slows her down, not her machine.

Unless her CPU is a C2Q (probably OC'd) or you've got a horrendously slow Internet connection, I have to say, you've got to be wrong there. Fire up Task Manager while loading some sites and see if the CPU usage hits 100% for more than a second.

Don't get me wrong, she might be fine with its level of performance; for example, I was browsing the other day on a WinXP and DDR2-era mobile Celeron (about 1.4GHz IIRC) belonging to a customer, and other customers still have AM2 Athlon 64 X2 builds from 2008-2009, and I'm sure they're fine with the level of performance they get. But considering that my parents' laptop (mobile C2D with discrete GPU) is definitely showing signs of not keeping up despite the SSD I recently put in it, and that I've upgraded a good few customers from Athlon II X2 CPUs to X4s and they've been happy with the results, web page designs are definitely more demanding than they were ten years ago.
 

imported_bman

Senior member
Jul 29, 2007
262
54
101
The big thing you are leaving out of your analysis is that the relationship between single-threaded performance and power is non-linear, with diminishing returns in performance as power increases. That means that gains in efficiency due to node transitions will be marginal at the top of the curve (~90W desktop parts) compared to the bottom of the curve (1W-15W mobile parts).
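As a toy illustration of that non-linearity (a sketch only; dynamic power scales roughly with C·V²·f, and the voltage/frequency points below are made-up values, not measured silicon):

```python
# Toy model of why the top of the frequency curve is so expensive.
# Dynamic power ~ C * V^2 * f; voltage must rise to sustain higher clocks.
# All numbers below are illustrative assumptions, not measured values.
vf_curve = [  # (GHz, volts) -- hypothetical voltage/frequency curve
    (3.0, 0.90),
    (3.5, 0.95),
    (4.0, 1.05),
    (4.5, 1.25),
]
C = 10.0  # arbitrary scaling constant

prev_power = None
for ghz, volts in vf_curve:
    power = C * volts**2 * ghz
    extra = "" if prev_power is None else f" (+{power - prev_power:.1f} W for +0.5 GHz)"
    print(f"{ghz:.1f} GHz @ {volts:.2f} V -> {power:.1f} W{extra}")
    prev_power = power
# Each 0.5 GHz step costs more power than the last, so efficiency gains from a new
# node look small at the ~90W end of the curve and large at the 1W-15W end.
```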

The upshot is that we have seen a significant decrease in the amount of power required to get the performance of the 2600K. Thus notebooks, AIOs, and micro/fanless PCs are the segments that have seen significant improvements in performance over the last few years. The 35W 7700T outperforms the 95W 2600K, see https://www.anandtech.com/bench/product/1850?vs=287 (scroll past the gaming tests to the CPU benchmarks). I can't find it now, but there was a site that compared Intel's 8th Gen 15W-25W U processors against the 2600K, and when the U processors were not throttling they were outperforming the 2600K.
 

trparky

Junior Member
Mar 2, 2008
14
0
76
This. I have a 2600K @ 4.4GHz for daily use. I can bench it at 4.5GHz or even 4.6GHz, but efficiency goes right out the window; it's not worth the extra volts and high temps to gain a negligible amount of performance.

Even at 4.4GHz it probably draws >130W, while a 7700K can probably do 4.5GHz at stock volts and pull approx 65W, so that's approx a 150% increase in efficiency, considering a 7700K is also significantly faster clock for clock vs a 2600K.
But in a desktop, who gives a crap about power usage? It's a desktop with power coming from the wall outlet. Now, if it's a notebook, then by all means we need to worry about power usage because of battery life.
 

scannall

Golden Member
Jan 1, 2012
1,960
1,678
136
But in a desktop, who gives a crap about power usage? It's a desktop with power coming from the wall outlet. Now, if it's a notebook, then by all means we need to worry about power usage because of battery life.
I do worry about that some. I prefer quiet and cool, and hot chips need more cooling and make more noise.
 

trparky

Junior Member
Mar 2, 2008
14
0
76
I do worry about that some. I prefer quiet and cool, and hot chips need more cooling and make more noise.
Yes, I understand that... to an extent, but if we're sacrificing performance to get better temps then no, that's not cool.
 

whm1974

Diamond Member
Jul 24, 2016
9,436
1,571
126
Yes, I understand that... to an extent, but if we're sacrificing performance to get better temps then no, that's not cool.
If you want a CPU with a 200+ watt TDP, well, I suppose that is your choice, but most of us don't. Personally, I would be happy with a 65W TDP, as that will net me enough performance without requiring fancy cooling.
 

dullard

Elite Member
May 21, 2001
26,042
4,689
126
But in a desktop, who gives a crap about power usage? It's a desktop with power coming from the wall outlet. Now, if it's a notebook, then by all means we need to worry about power usage because of battery life.
  • Anyone who pays the power bill and wants to keep it low.
  • Anyone whose office is ~5° hotter than the rest of the house because computers put out so much heat and who wants to be more comfortable.
  • Anyone who puts a computer inside a computer desk/cabinet and doesn't want the CPU downthrottling to base clocks all the time.
  • Anyone who has a small form factor PC that can't provide that power continuously.
  • Anyone who has an HTPC or sound recording computer and wants there to be little to no fan noise.
  • Anyone who runs a server that is difficult/expensive to keep cool (since server CPUs are quite similar to desktop CPUs but usually with more cores).
  • Anyone who wants a cheaper computer that doesn't need to resort to exotic cooling.
  • Etc.
Don't over assume that everyone is exactly like you.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,576
126
  • Anyone who pays the power bill and wants to keep it low.
  • Anyone whose office is ~5° hotter than the rest of the house because computers put out so much heat and who wants to be more comfortable.
  • Anyone who puts a computer inside a computer desk/cabinet and doesn't want the CPU downthrottling to base clocks all the time.
  • Anyone who has a small form factor PC that can't provide that power continuously.
  • Anyone who has an HTPC or sound recording computer and wants there to be little to no fan noise.
  • Anyone who runs a server that is difficult/expensive to keep cool (since server CPUs are quite similar to desktop CPUs but usually with more cores).
  • Anyone who wants a cheaper computer that doesn't need to resort to exotic cooling.
  • Etc.
Don't over assume that everyone is exactly like you.
Well, you've always had the option of lower-power-rated CPUs for those applications where heat and/or power is a real problem.

Bleeding edge folks generally aren't too worried about the TDP of their CPU if it's the fastest thing out there for the job at hand.

And there are always the folks who try to walk on top of the fence and get low power, high performance, and low heat all at once.
 

nOOky

Diamond Member
Aug 17, 2004
3,262
2,347
136
Enthusiasts are a very small percentage of people, and about the only ones who actually give a crap about these things in normal everyday usage. I would call my wife a "typical" user: she just browses the internet, uses Office applications, and thinks that when she wants to do something she should not have to jump through hoops. Pure internet speed itself would be the biggest limiting factor in most people's systems these days. My wife will bang on her keyboard and proclaim "this computer is so slow" when in fact it is always the internet connection.

I would say you can make a case for an older dual-core PC being plenty fast enough for the average user. I would venture to guess that most people who use special applications or programs may learn about CPU/GPU speed and how it may help them (say video or photo editors, for example), but if you think about the average user, they never go into Best Buy and ask "how many watts does this draw at the wall, how much heat and noise does it put out when it's under my desk, and how many FPS does it run in Farmville at 4K?"

Now enthusiasts like to bicker endlessly on the internet about how much these things matter. I have an older E2160 system sitting by my shiny new rig, and for most internet and Office applications I can't tell the difference. The only way I'd know would be to run benchmarks so I could say "aha it's 2.3 seconds faster in Winrar unzipping the game patch".
 

dullard

Elite Member
May 21, 2001
26,042
4,689
126
Well, you've always had the option of lower-power-rated CPUs for those applications where heat and/or power is a real problem.

Bleeding edge folks generally aren't too worried about the TDP of their CPU if it's the fastest thing out there for the job at hand.
There certainly are a significant number of bleeding edge people who don't care about power usage. However, that is still not that large a group of people. They are vocal and willing to fork over a lot of money, so they shouldn't be ignored. I just thought that the "who gives a crap about power usage" comment was way overemphasizing this minority.

To really nitpick, Coffee Lake won't have lower power CPUs for months.
 

whm1974

Diamond Member
Jul 24, 2016
9,436
1,571
126
There certainly are a significant number of bleeding edge people who don't care about power usage. However, that is still not that large a group of people. They are vocal and willing to fork over a lot of money, so they shouldn't be ignored. I just thought that the "who gives a crap about power usage" comment was way overemphasizing this minority.

To really nitpick, Coffee Lake won't have lower power CPUs for months.
I don't know, the i7-8700 is 6 cores/12 threads at 3200MHz with a 65W TDP.
 

dullard

Elite Member
May 21, 2001
26,042
4,689
126
I don't know, the i7-8700 is 6 cores/12 threads at 3200MHz with a 65W TDP.
The T-line (35 W) is missing for now in Coffee Lake (such as the 6700T and 7700T). None of the low-50 W parts are there (such as the 6300 and 7300).

Even the arguably best price/performance 65 W part (the 6600 and 7600 equivalent) is missing. In Skylake, a measly $31 more bought you an additional 500 MHz of CPU speed (6600 vs 6400), all for the same TDP. In Kaby Lake, $42 bought you 600 MHz (7600 vs 7400). In Coffee Lake, right now, you have to pay $121 more for 500 MHz (8700 vs 8400). The sweet spot of much more speed for a small amount of money and no additional TDP is missing (the 8600).
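Putting those same dollar figures side by side makes the gap plain (a quick sketch that just restates the numbers above):

```python
# Dollars per extra 100 MHz within the 65 W tier, using the figures quoted above.
steps = {
    "Skylake (6400 -> 6600)":     (31, 500),
    "Kaby Lake (7400 -> 7600)":   (42, 600),
    "Coffee Lake (8400 -> 8700)": (121, 500),
}
for name, (dollars, mhz) in steps.items():
    print(f"{name}: ${dollars} for +{mhz} MHz = ${dollars / (mhz / 100):.2f} per 100 MHz")
# Skylake ~$6.20, Kaby Lake ~$7.00, Coffee Lake ~$24.20 per 100 MHz --
# the missing 8600 is what would normally fill that sweet spot.
```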
 

DeletedMember377562

I can't help but feel that this is more of a "Crap on x86" thread than one seeking actual discussion.


It wasn't. I was seeking answers to several questions, and still feel I haven't gotten them. Like my question about why a large desktop processor has fewer transistors than a small smartphone SoC like the A11, even if we look at just the CPU and GPU side. I also don't agree that Sandy Bridge to Skylake led to huge performance/watt improvements. 32nm to 14nm ought to have led to larger improvements in this area on desktop processors. GPUs and mobile CPUs gained way more performance/watt by reducing their die size than Intel did, I feel.

You also fail to explain to me why the A11 manages to actually match Intel's i7-8650U, despite being only 4W. Some guy here talked about Geekbench not being a sufficient comparison, but the A11 excels in other benchmarks as well -- you are free to check them out. Sure, the A11 doesn't have better sustained performance (the 8650U isn't as good in this regard either, tbh), but that's because of the constraints of the smartphone form factor. Put it into a laptop, and it's a whole different story. There's also no reason for this progression to stop, as future Apple SoCs will continue to improve in performance. I would also assume a 15W version of their architecture would be even more powerful as well.
 

whm1974

Diamond Member
Jul 24, 2016
9,436
1,571
126
It wasn't. I was seeking answers to several questions, and still feel I haven't gotten them. Like my question about why a large desktop processor has fewer transistors than a small smartphone SoC like the A11, even if we look at just the CPU and GPU side. I also don't agree that Sandy Bridge to Skylake led to huge performance/watt improvements. 32nm to 14nm ought to have led to larger improvements in this area on desktop processors. GPUs and mobile CPUs gained way more performance/watt by reducing their die size than Intel did, I feel.

You also fail to explain to me why the A11 manages to actually match Intel's i7-8650U, despite being only 4W. Some guy here talked about Geekbench not being a sufficient comparison, but the A11 excels in other benchmarks as well -- you are free to check them out. Sure, the A11 doesn't have better sustained performance (the 8650U isn't as good in this regard either, tbh), but that's because of the constraints of the smartphone form factor. Put it into a laptop, and it's a whole different story. There's also no reason for this progression to stop, as future Apple SoCs will continue to improve in performance. I would also assume a 15W version of their architecture would be even more powerful as well.
With AMD back in the game, I'm sure will see bigger improvements in x86.
 

dullard

Elite Member
May 21, 2001
26,042
4,689
126
It wasn't. I was seeking answers to several questions, and still feel I haven't gotten them. Like my question about why a large desktop processor has fewer transistors than a small smartphone SoC like the A11, even if we look at just the CPU and GPU side...
There's also no reason for this progression to stop, as future Apple SoCs will continue to improve in performance. I would also assume a 15W version of their architecture would be even more powerful as well.
You are asking about RISC vs CISC. There are plenty of articles on that difference and you would do better to read a good article than to ask here about generalities.

The A11 is very close to RISC (reduced instruction set computer). The name itself implies that it is a simple design. Also, the A11 is a system on a chip and not just a processor. Many of the SoC components (graphics, memory, etc.) are low-complexity and easy to pack densely. Finally, the A11 is built on 10 nm vs the i7-8650U, which despite the "8" in the first digit is a generation-old Kaby Lake processor on 14 nm.

Combine the simple design (RISC) with simple repeatable components (GPU, memory) and a leading-edge 10 nm process, and of course the A11 packs a lot of transistors into a small space.

You are comparing that A11 to a CISC processor (complex instruction set computer) on an old-generation 14 nm process. Not only that, but the x86 processor has 40 years of instruction baggage that it has to carry along to support any x86 program ever written (the A11 gets to make a clean break with each release). This baggage just keeps increasing, as an x86 CPU has to do everything well (from notebooks, to desktops, to workstations, to servers), while the A11 just has to run one phone with usually simple software well. So of course the x86 transistors are less dense. It is harder to stuff elephants into a room than to stuff the same number of Lego bricks into that room.

As for performance, it depends on the types of commands needed. Put a simple instruction into a benchmark and run it repeatedly and the A11 will shine. Make a benchmark that heavily uses the complex instructions and the A11 will whimper and die.

As to why CPUs are hard to keep improving: they are improving, it's just that the low-hanging fruit has all been picked. Frequencies are at a brick wall. Core counts can only help so much, and only in limited applications. Even just throwing more transistors at the problem is difficult, as Intel is struggling with 10 nm. Combine that with what others have said: CPU sales pale in comparison to mobile phone sales, and CPU performance is good enough for almost everyone. Thus there isn't much demand to throw more money at CPUs to solve those difficult problems.
 

imported_bman

Senior member
Jul 29, 2007
262
54
101
I also don't agree that Sandy Bridge to Skylake led to huge performance/watt improvements. 32nm to 14nm ought to have led to larger improvements in this area on desktop processors. GPUs and mobile CPUs gained way more performance/watt by reducing their die size than Intel did, I feel.

The 14nm 7700T at 35W outperforms the 32nm 2600K at 95W. That is over a 3x improvement in terms of performance/watt once you factor in the performance gains of the 7700T, and it is pretty close to the metrics Intel advertised (32nm->22nm ~0.5x power, 22nm->14nm ~0.7x power, 14nm->14+nm ~0.85x power). Of course GPUs are going to do far better from node improvements, because graphics workloads can be processed with massive parallelism, so performance can scale close to linearly just from adding transistors.
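For what it's worth, those advertised node factors compound out close to the observed TDP drop (a quick arithmetic check using only the figures above, not official data):

```python
# Compound the per-node power reductions quoted above.
factors = {"32nm -> 22nm": 0.5, "22nm -> 14nm": 0.7, "14nm -> 14+nm": 0.85}
combined = 1.0
for step, factor in factors.items():
    combined *= factor
print(f"combined power factor: {combined:.2f}")                  # ~0.30
print(f"implied 2600K-class power on 14+nm: {95 * combined:.0f} W "
      f"(vs the 7700T's 35 W TDP, which also delivers more performance)")
```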

As for the question you are asking see: http://www.lighterra.com/papers/modernmicroprocessors/

The basic breakdown is the following. With CPUs you can add more cores, but Amdahl's law comes into effect, so outside of video rendering, scientific/engineering applications, and maybe a handful of games (more recently), little is gained by adding more cores. Another option is to add more transistors for greater instruction-level parallelism (ILP) or more instructions in flight, but these solutions hit walls and exponentially increase in complexity and cost for diminishing returns as they scale up. Desktop x86 CPUs have picked the low-hanging ILP fruit, whereas ARM-based CPUs had plenty left to pick over the last few years, so don't expect the uplift in performance of the AX series of processors to continue indefinitely on the IPC front once this ILP fruit has been picked on the ARM side. The one area where desktop CPUs have significantly improved their performance by adding more transistors over the last few years has been with more powerful vector units; the issue is that these vector units require developers to take advantage of them, and in most applications the ability to exploit them is limited.
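A quick illustration of the Amdahl's law point (a sketch only; the 80% parallel fraction is an arbitrary assumption, and many desktop workloads are far less parallel than that):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.80  # assume 80% of the workload parallelizes
for cores in (1, 2, 4, 6, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x speedup")
# 2 -> 1.67x, 4 -> 2.50x, 6 -> 3.00x, 8 -> 3.33x, 16 -> 4.00x:
# returns diminish quickly, which is why extra cores alone can't keep desktop
# performance scaling the way frequency and node gains once did.
```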
 