Why are desktop CPUs so slow to improve?

  • Thread starter DeletedMember377562

DeletedMember377562

Over the years, the question of why desktop CPUs stopped improving was always answered with "architectures cannot become better". That never made much sense, seeing as every other segment managed to reap the benefits of smaller process nodes, getting better performance (through higher clock speeds and more transistors) and efficiency roughly in proportion to the shrinking dies. For example, going from 28nm to 14nm brought roughly 70% higher performance on the GPU side, through higher clock speeds and extra transistors. We have seen similar gains in smartphone SoCs.

So why isn't this the case with desktop CPUs? In theory, Intel ought to have been able to increase the transistor count substantially going from 32nm to 14nm, from Sandy Bridge to Skylake/Kaby Lake, etc. But for some reason they haven't. Many would-be experts on forums like Anandtech, Techpowerup, etc. always told me it was about technological limitations. What about efficiency, then? Efficiency hasn't scaled with smaller process nodes from SB to SL at the same clock speeds in any significant amount, as opposed to mobile SoCs, laptops, GPUs and more. Why?

I remember reading somewhere that Apple's A10 at 14nm (which is inferior to Intel's 14nm process) had 3.3 billion transistors. Sure, the SoC includes more than just the CPU/GPU, but even with that in mind, that alone can't account for the clear advantage over Skylake with its 1.75 billion transistors. The A11 chip isn't just much smaller in size, it also houses far more transistors. Why and how?

The classic argument was that Apple was nowhere near Intel in performance, and that their progress would grind to a halt. Which it clearly didn't; they started matching Intel as early as the A9/A10 architectures. Then the argument shifted to "but multithreaded performance is still way below Intel's". Another claim that has now been debunked with the A11. Geekbench scores:


A11 (6c/6t @ 2.4 GHz):
Single-core: 4,200
Multi-core: 10,000


i7-8650U @ 2.11 GHz (4c/8t):
Single-core: 3,900
Multi-core: 13,000


Remember, one is a much smaller ~4W chip, the other a larger 15W chip. Also, the 8650U has a turbo speed that goes all the way up to 4.2 GHz. Clearly, Apple has managed to produce a much, much better architecture here. Even if we bring sustained performance into the mix, it's reasonable to think that Apple's big cores, scaled up to laptop size and a 15W TDP, would close that gap.

Also, remember that this is on the mobile front, where performance has increased 30-40% every single generation. Their GPUs and CPUs keep improving year after year. Intel has relied on higher clock speeds and turbo boost for its performance gains over the last few years, and it's clearly hitting a wall around 4 GHz. Meanwhile, ARM, Samsung and Apple keep improving ~30% year-on-year -- not only through clock speeds, but also through substantial IPC improvements.
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
Oh, Intel and AMD could improve their clock speeds quite a bit if most end users were willing to deal with high TDPs and with needing water cooling to keep the CPUs from burning up. Not to mention paying the higher prices this would entail.

However, most people are not willing to do this, so...
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
You are comparing apples to oranges with your chip selection, imo.

Mobile phones change and evolve very rapidly. The market has quite a high turnover rate, driven by people wanting the newest thing.

The reason X86 desktop/mobile chips haven't improved is lack of competition and a lack of need for any improvement, imo.

Also a coming lack of need for a desktop PC.

The big improvement we saw was from AMD, which is now hot on the heels of Intel in the desktop/mobile cpu department.

Unfortunately, that's not an overall improvement in the arena.

I'm not sure there is an application other than gaming that needs much improvement in the desktop PC arena. And that mostly needed more cores, which we now have.

If you want to surf the net, watch YouTube, do your homework, and play most games, you can do that just fine with a few years old CPU.

With discrete GPUs we seem to have a quicker turnover, more of a need/want for the latest, so those have moved along fairly well.
 
  • Like
Reactions: gOJDO_n and whm1974

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
And don't forget, all of the low-hanging fruit has been picked clean for desktop CPUs.
 

DeletedMember377562

LTC8K6 said:
You are comparing apples to oranges with your chip selection, imo.

Mobile phones change and evolve very rapidly. The market has quite a turnover rate for having the newest thing.

The reason X86 desktop/mobile chips haven't improved is lack of competition and a lack of need for any improvement, imo.

Also a coming lack of need for a desktop PC.

The big improvement we saw was from AMD, which is now hot on the heels of Intel in the desktop/mobile cpu department.

Unfortunately, that's not an overall improvement in the arena.

I'm not sure there is an application other than gaming that needs much improvement in the desktop PC arena? And that needed more cores mostly, which we now have.

If you want to surf the net, watch YouTube, do your homework, and play most games, you can do that just fine with a few years old CPU.

With discrete GPUs we seem to have a quicker turnover, more of a need/want for the latest, so those have moved along fairly well.


That still doesn't answer the question of why smaller process nodes haven't led to bigger improvements in Intel CPUs, in a similar fashion to other segments. It's not like Intel stopped making processors; they sell new chips every year. Why is the transistor count so low, even at such small process nodes? And if they keep the process node the same, what about efficiency? The 7700K is not much more efficient than a 2600K @ 4.5 GHz, which a 32nm -> 14nm shrink ought to give.

whm1974 said:
And don't forget, all of the low-hanging fruit has been picked clean for desktop CPUs.

This excuse ought to be dead by now. Apple is already surpassing Intel in CPU architecture when comparing their 4W mobile chips to Intel's 4W CPUs -- or even Intel's 15W chips. ARM architectures used by Samsung, Qualcomm, MediaTek and others are not far behind either.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,981
136
It's a confluence of several factors.

One major one is that CPUs hit a point where they were more than good enough for a lot of users. A 2700K from almost 7 years ago is still going to be good enough for someone who's doing web browsing, some light productivity work, and watching videos on YouTube or Netflix. Contrast this with the '90s and early '00s, where even those users could see massive benefits from increased CPU performance; there's just not as much market demand to keep driving performance in the same way anymore. If a person is running NoScript and an ad blocker, there's even less need for a new CPU as most of what taxes them these days is utterly unnecessary web bloat and similar cruft.

A second factor is that there's been a general migration towards notebooks over time which means prioritizing efficiency over performance when designing CPUs. This is tied in with the first factor as the general performance is good enough, so now the goal is to get that performance for as little cost in terms of power as possible in order to get these chips into ever smaller and thinner notebooks. If you remember back far enough, all notebooks were beastly and heavy by modern standards. Even Apple's old laptops typically weighed well over 5 lbs. Unless you're selling something as a desktop replacement notebook, I don't think you'll find anything that heavy these days outside of a few gaming notebooks with huge screens and desktop GPUs crammed into them.

A third factor is that for a five-year stretch, Intel had no real competition in the market. AMD had chips, but they were relegated to the bargain bin or only used by hardcore fans of the company. Intel didn't bother making any major changes to their architecture and has simply been tweaking it since Nehalem. This worked fine during the first few iterations like Sandy and Ivy Bridge, where they could still get some double-digit IPC gains, but has fallen off heavily with the most recent generations not having much left to squeeze out of the architecture. However, Intel had no incentive to push harder since there was no competition. They seem to have stepped up their game considerably since Zen, so perhaps we'll see a return to more year-over-year improvement from them.

A final factor is that updates to process technology have slowed over time. Moore's law is probably closer to 24-28 months at this point instead of the original 18 months. Some of this goes back to less demand for performance improvements in consumer CPUs, which means the best approach for CPU manufacturers is to make smaller CPUs that deliver the same level of performance (at lower power cost) as previous CPUs. Another part of this is that the cost to develop new nodes is increasing at a rate that outstrips demand, so there's less of a push to get to those new nodes, and when you do get there, it's often more economical to make small chips for better yields.

Take all of those (and probably a few other things that also factor in) together and you get the situation as it exists today. If you look at the mobile ARM market, you see vastly different factors. The performance wasn't good enough for most consumers, the chips were already small and could increase in size without making them too difficult to fab in quantity, and there were several companies all competing against each other to offer the best performance to an expanding market that was more than willing to replace their device every two to three years. It's little wonder that there were such large performance gains.
 

DeletedMember377562

Mopetar said:
It's a confluence of several factors.

But that still doesn't explain how much better the A11 is than comparable Intel chips at 4W (which goes completely against the predictions many would-be experts on these forums made about mobile SoC performance improvements stopping). Mobile chips haven't shown much sign of grinding to a halt either, with more improvements expected several years down the road. The 4W mobile architectures that already exist today rival and outmatch Intel's 4W and 15W chips, and it makes me wonder how much better a dedicated desktop CPU from Apple or ARM would be.
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
DeletedMember377562 said:
That still doesn't answer the question of why smaller process nodes haven't led to bigger improvements in Intel CPUs, in a similar fashion to other segments. It's not like Intel stopped making processors; they sell new chips every year. Why is the transistor count so low, even at such small process nodes? And if they keep the process node the same, what about efficiency? The 7700K is not much more efficient than a 2600K @ 4.5 GHz, which a 32nm -> 14nm shrink ought to give.



DeletedMember377562 said:
This excuse ought to be dead by now. Apple is already surpassing Intel in CPU architecture when comparing their 4W mobile chips to Intel's 4W CPUs -- or even Intel's 15W chips. ARM architectures used by Samsung, Qualcomm, MediaTek and others are not far behind either.

It's not an excuse. x86 is pretty much out of breathing room for IPC gains. While we will still see some improvement in that area, most of the gains will come from higher core and thread counts at the same clock speeds, and maybe slightly lower TDPs.

ARM and other RISC architectures like RISC-V still have plenty of breathing room for IPC and clockspeed gains.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,414
8,356
126
DeletedMember377562 said:
But that still doesn't explain how much better the A11 is than comparable Intel chips at 4W (which goes completely against the predictions many would-be experts on these forums made about mobile SoC performance improvements stopping).
Again, you need to know and understand exactly what Geekbench tests, and how, before you can determine what exactly, if anything, Intel is deficient at. Apparently, the previous version of Geekbench included SHA-1 as a large part of the integer testing. All you need for a huge score there is a small fixed-function unit. Apple had it, Intel didn't. Also, it seems like a lot of the tests were extremely cache-friendly, so they didn't test the greater memory subsystem at all.
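To make that concrete, here is a minimal sketch (my own illustration, not anything taken from Geekbench) of the kind of microbenchmark that over-rewards a dedicated SHA-1 unit: it spends essentially all of its time hashing a cache-resident buffer, so a small fixed-function block or crypto extension inflates the score without saying much about general IPC or the memory subsystem.

```c
/* Hypothetical SHA-1 throughput microbenchmark -- illustration only.
 * Build (assumes OpenSSL's libcrypto): gcc -O2 sha1_bench.c -lcrypto */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void) {
    static unsigned char buf[1 << 20];      /* 1 MiB input, stays cache-friendly */
    unsigned char digest[SHA_DIGEST_LENGTH];
    memset(buf, 0xA5, sizeof buf);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 200; i++)           /* hash ~200 MiB in total */
        SHA1(buf, sizeof buf, digest);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* A chip with a SHA fixed-function unit or crypto extension posts a much
     * higher number here regardless of how strong its general-purpose cores are. */
    printf("SHA-1 throughput: %.1f MiB/s\n", 200.0 / secs);
    return 0;
}
```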
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
ElFenix said:
Again, you need to know and understand exactly what Geekbench tests, and how, before you can determine what exactly, if anything, Intel is deficient at. Apparently, the previous version of Geekbench included SHA-1 as a large part of the integer testing. All you need for a huge score there is a small fixed-function unit. Apple had it, Intel didn't. Also, it seems like a lot of the tests were extremely cache-friendly, so they didn't test the greater memory subsystem at all.
And besides, shouldn't a reviewer use multiple benchmarks to test performance anyway?
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,981
136
DeletedMember377562 said:
But that still doesn't explain how much better the A11 is than comparable Intel chips at 4W (which goes completely against the predictions many would-be experts on these forums made about mobile SoC performance improvements stopping).

ARM is much more suited to low power solutions and isn't dragging around a 30+ year old architecture, so it's going to have an edge in this niche. That Intel does as well as it actually does in those market segments is a testament to their engineers.

Also, please find me some posts from people who are claiming that SoC performance would stop. I seriously doubt these exist in any quantity or are coming from anyone that the regulars here treat as an expert.
 
  • Like
Reactions: whm1974

VirtualLarry

No Lifer
Aug 25, 2001
56,327
10,035
126
Mopetar said:
If a person is running NoScript and an ad blocker, there's even less need for a new CPU as most of what taxes them these days is utterly unnecessary web bloat and similar cruft.
This really cannot be overstated, IMHO. "Web bloat" (ads!) is probably the biggest reason a person thinks they need a newer, more powerful, many-threaded CPU. (That, and newer AAA games, if that person is a "Gamer".)

Run a good ad blocker, a script blocker, and a web-beacon blocker (which eliminates the extensive RTT for fetching web beacons on every page load), and your internet experience becomes much more pleasurable. Granted, what remains to be transferred and rendered still takes SOME CPU, but it lightens the load enough that prior-gen CPUs can still (mostly) keep up fine.
 

Burpo

Diamond Member
Sep 10, 2013
4,223
473
126
Just look at the OP's post history. His posts are close-minded, opinionated, insulting, and even profane (warned numerous times). Save your breath on this one.
 
  • Like
Reactions: dlerious

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
I have been wondering how much improvement we could get out of x86 if we went to a true 5nm process. Maybe something with 8 cores, a much higher IPC, 3.6 GHz clocks, and a 50W TDP?
 

traderjay

Senior member
Sep 24, 2015
220
165
116
VirtualLarry said:
This really cannot be overstated, IMHO. "Web bloat" (ads!) is probably the biggest reason a person thinks they need a newer, more powerful, many-threaded CPU. (That, and newer AAA games, if that person is a "Gamer".)

Run a good ad blocker, a script blocker, and a web-beacon blocker (which eliminates the extensive RTT for fetching web beacons on every page load), and your internet experience becomes much more pleasurable. Granted, what remains to be transferred and rendered still takes SOME CPU, but it lightens the load enough that prior-gen CPUs can still (mostly) keep up fine.

When I deployed Sophos UTM and Pi-hole ad blocking on my network, the experience was like nirvana. Hard to describe with words, but it's similar to when I first bought a GPU with a T&L engine, haha.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,674
3,796
136
I can't help but feel that this is more of a "Crap on x86" thread than one seeking actual discussion.

ARM has its niche and x86 its own. There is some crossover, naturally. You also can't just use one benchmark to make blanket statements. Even without serious competition, Intel made huge gains in performance per watt between Sandy and Skylake. That is what Intel was focusing on, not absolute performance, as they already had plenty. Also, mobile is/was the priority.

x86 is still very popular for many reasons. It is not going anywhere any time soon. Recent years have reminded me of the '90s, when people thought x86 was on its way out in favor of RISC. It didn't happen then, and it's too soon to tell whether it happens now. I am very curious to see how the new Windows on ARM works out. With key components and software compiled for ARM, and less resource-heavy software running through emulation (or whatever it is), it may be interesting. There will be a power penalty for that on-the-fly x86-to-ARM translation. These are interesting times.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,674
3,796
136
DeletedMember377562 said:
...7700K is not much more efficient than 2600K @ 4.5 GHz, as 32nm -> 14nm ought to give...

That's just a blatant lie, or misinformation at best.

https://www.anandtech.com/bench/product/287?vs=1826

The 7700K blows the doors off the 2600K in anything non-gaming (i.e., not GPU-bound). And how efficient do you think that 2600K is at 4.5 GHz? Even without adding voltage, you are increasing the power usage a good bit (it's over a 1 GHz overclock).
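For a rough sense of scale: dynamic power goes roughly with frequency times voltage squared, so the frequency bump alone is significant. A back-of-the-envelope sketch (the 3.4 GHz stock clock is the 2600K's spec; the 10% voltage bump is just an illustrative assumption):

```latex
% Rough dynamic-power scaling for a 3.4 GHz -> 4.5 GHz overclock.
\[
  P_{\mathrm{dyn}} \approx \alpha C V^{2} f
  \;\Rightarrow\;
  \frac{P_{4.5}}{P_{3.4}} \approx \frac{4.5}{3.4} \approx 1.32 \text{ (same voltage)},
  \qquad
  \approx 1.32 \times 1.10^{2} \approx 1.6 \text{ (with a 10\% voltage bump)}.
\]
```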
 

Excessi0n

Member
Jul 25, 2014
140
36
101
DeletedMember377562 said:
Over the years, the question of why desktop CPUs stopped improving was always answered with "architectures cannot become better". That never made much sense, seeing as every other segment managed to reap the benefits of smaller process nodes, getting better performance (through higher clock speeds and more transistors) and efficiency roughly in proportion to the shrinking dies. For example, going from 28nm to 14nm brought roughly 70% higher performance on the GPU side, through higher clock speeds and extra transistors. We have seen similar gains in smartphone SoCs.

Graphics is an example of an embarrassingly parallel problem, so you can always add more performance by piling on more cores. That isn't the case for most of the things that CPUs are used for, so endlessly piling on transistors doesn't always help. GPUs are also different in that they have drivers, which can allow programs to make use of many new architectural features even if they didn't exist when the program was made. Something running on a CPU, however, will never be able to use instructions that it wasn't compiled for. And if you recompile it to take advantage of new instructions, then it won't run on older machines (without additional work, anyways).
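The "additional work" is typically runtime dispatch: ship several versions of the hot function in one binary and let the loader pick the best one the CPU actually supports. A minimal sketch, assuming GCC or Clang on x86-64 (the function and the AVX2 target are illustrative choices, not anything from this thread):

```c
/* Function multi-versioning: the compiler emits an AVX2 clone and a baseline
 * clone of sum(), plus an ifunc resolver that picks one at load time, so the
 * same binary runs on both old and new machines.
 * Build: gcc -O2 dispatch.c */
#include <stdio.h>

__attribute__((target_clones("avx2", "default")))
long sum(const int *a, long n) {
    long s = 0;
    for (long i = 0; i < n; i++)   /* auto-vectorized in the AVX2 clone */
        s += a[i];
    return s;
}

int main(void) {
    int data[1024];
    for (int i = 0; i < 1024; i++)
        data[i] = i;
    printf("sum = %ld\n", sum(data, 1024));   /* prints sum = 523776 on any x86-64 CPU */
    return 0;
}
```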
 
  • Like
Reactions: tincmulc

trparky

Junior Member
Mar 2, 2008
14
0
76
Excessi0n said:
Something running on a CPU, however, will never be able to use instructions that it wasn't compiled for.
That is probably one thing that is holding us back: backwards compatibility with older hardware and software. Unfortunately, backwards compatibility has become a boat anchor tied around all of our necks. If we are to make any advances from this point on, we're going to have to trim the fat and get rid of backwards compatibility in many of our programs and operating systems.
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
trparky said:
That is probably one thing that is holding us back: backwards compatibility with older hardware and software. Unfortunately, backwards compatibility has become a boat anchor tied around all of our necks. If we are to make any advances from this point on, we're going to have to trim the fat and get rid of backwards compatibility in many of our programs and operating systems.
RISC-V anyone?
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
LTC8K6 said:
If you want to surf the net, watch YouTube, do your homework, and play most games, you can do that just fine with a few years old CPU.

With discrete GPUs we seem to have a quicker turnover, more of a need/want for the latest, so those have moved along fairly well.
QFT.

My wife does her research on the net, sends email, etc. She uses a computer from 2008. The only thing I've done with it was to swap her 1/2 TB HD for a 1/2 TB SSD. Now she flies as fast as she wants. It's the Internet that slows her down, not her machine.
 

whm1974

Diamond Member
Jul 24, 2016
9,460
1,570
96
ehume said:
QFT.

My wife does her research on the net, sends email, etc. She uses a computer from 2008. The only thing I've done with it was to swap her 1/2 TB HD for a 1/2 TB SSD. Now she flies as fast as she wants. It's the Internet that slows her down, not her machine.
Maybe you should install ad-blocking software? Anyway, I use Privacy Badger and it works fairly well; it has made my web browser much faster.