CPU performance improvements over the last 15-20 years

Timmah!

Golden Member
Jul 24, 2010
Seeing all those fancy Threadripper Cinebench results got me thinking about how much faster it is than my current CPU, which I bought just last year, and subsequently how much CPUs have actually improved since my first own computer, which IIRC was sometime in 2000.

I'm not really sure what the best benchmark app to measure this performance increase would be, since as we all know there are different kinds of workloads, new instructions introduced over time, different operating systems, etc... etc... so I am sticking with Cinebench as the measurement tool, as I can find results for all the CPUs I owned bar the very first one, and it seems to scale well with both additional cores and frequency...

Not to mention I do actual rendering for a living (partially), albeit with a different renderer rather than Cinema4D. It's safe to say I consider Cinebench more than a synthetic test, since rendering is very much a real-life workload to me.

I was considering PCmark99 too, as I recall using it on those first computers of mine and I remember those scores, but I think that one was not multithreaded... so I guess that made it a no-go. Not sure about other alternatives like Geekbench either...

Anyway, on to the CPUs I owned and their CB scores at stock clocks:

The first CPU I owned was a Duron 700 MHz (Spitfire core). I could not find a CB R15 score for it, but knowing its R10 score, plus both the R10 and R15 scores for some other CPUs, a little comparison/extrapolation lets me infer a hypothetical score of about 10 CB points...
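To illustrate the method (not the exact numbers), here's a minimal Python sketch of that cross-version extrapolation; the R10/R15 reference scores below are placeholders, not measured values:

```python
# Cross-version extrapolation: if a reference CPU has scores in both
# Cinebench R10 and R15, the R15/R10 ratio can be used to project an
# R15 score for a CPU that only has an R10 result. All numbers below
# are illustrative placeholders, not measured values.

r10_reference = 2500.0  # reference CPU, Cinebench R10 (placeholder)
r15_reference = 100.0   # same reference CPU, Cinebench R15 (placeholder)
r10_duron = 250.0       # Duron 700, Cinebench R10 (placeholder)

# Assume the R15/R10 ratio holds roughly constant across similar CPUs.
r15_duron = r10_duron * (r15_reference / r10_reference)
print(f"Inferred Cinebench R15 score: {r15_duron:.0f} points")  # -> 10
```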

And the rest:

Athlon 64 3200+ (Venice core): 40 CB points (4x faster)
Core 2 Duo E8400 3 GHz: 141 CB points (3.5x faster)
Core i7 980X 3.3 GHz: 770 CB points (5.5x faster)
Core i7 6850K 3.6 GHz: 1150 CB points (1.5x faster)

Conclusion: the performance between my first and latest (so far) CPU increased 115x over a span of 16 years :p That does not look that bad. However, the increase mostly happened over the course of the initial 10 years. The last two CPUs though, awwww. 6 years apart and only a 1.5x speedup. I guess that in regard to Threadripper / i9, it was about time!
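For anyone who wants to check the arithmetic, here's a small Python sketch using the scores listed above (the 16-year span and the per-year rate are just derived from those same numbers):

```python
# Sanity-check the per-upgrade speedups and the overall 115x claim,
# using the stock Cinebench R15 scores listed above (Duron inferred).
scores = [
    ("Duron 700", 10),
    ("Athlon 64 3200+", 40),
    ("Core 2 Duo E8400", 141),
    ("Core i7 980X", 770),
    ("Core i7 6850K", 1150),
]

# Each upgrade's speedup relative to the previous CPU.
for (_, prev), (name, cur) in zip(scores, scores[1:]):
    print(f"{name}: {cur} points ({cur / prev:.1f}x faster)")

overall = scores[-1][1] / scores[0][1]
years = 16  # roughly 2000 to 2016
print(f"Overall: {overall:.0f}x over {years} years "
      f"(about {overall ** (1 / years) - 1:.0%} per year, compounded)")
```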

Can't wait for the 7920X/7940X; one of those will be mine! And with the Threadripperesque scores they are going to have, the speedup against my beloved Duron will be circa 300x! Huzzah!
 

SPBHM

Diamond Member
Sep 12, 2012
Yes, but if you look at per-core performance, even the current fastest CPU is under 200 points. That's what, 13 years after the A64, at over 2x the clock?
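Rough numbers behind that observation, as an illustrative Python sketch; the modern single-thread score and clock below are ballpark assumptions, not measurements:

```python
# Per-core comparison: Athlon 64 3200+ (single core, ~2.0 GHz, 40 CB R15
# points) vs. a current fast core (just under 200 points single-threaded).
# The modern score and clock below are ballpark assumptions.
a64_score, a64_clock_ghz = 40, 2.0
modern_score, modern_clock_ghz = 195, 4.5

speedup = modern_score / a64_score
clock_ratio = modern_clock_ghz / a64_clock_ghz
print(f"Single-thread speedup: {speedup:.1f}x")                           # ~4.9x
print(f"Clock-normalized (IPC-like) gain: {speedup / clock_ratio:.1f}x")  # ~2.2x
```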
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
Seeing all those fancy Threadripper Cinebench results got me thinking about how much faster it is than my current CPU, which I bought just last year, and subsequently how much CPUs have actually improved since my first own computer, which IIRC was sometime in 2000.
My Threadripper does 3408 on CB15
 

moinmoin

Diamond Member
Jun 1, 2017
Yes, but if you look at per-core performance, even the current fastest CPU is under 200 points. That's what, 13 years after the A64, at over 2x the clock?
That's why AMD talked about "breaking the constraints of Moore's law": further per-core performance improvements are relatively hard to achieve outside of increasing frequencies.


In that regard I wonder if we will ever see a breakthrough with Core-Fusion (a hardware/compiler concept that runs sequential logic across multiple cores) to increase single-thread performance.
 

Timmah!

Golden Member
Jul 24, 2010
Yes, but if you look at per-core performance, even the current fastest CPU is under 200 points. That's what, 13 years after the A64, at over 2x the clock?

Yeah, looking at it that way, it's not that awesome, I admit. Quantum PCs can't come fast enough :)
 

DaveSimmons

Elite Member
Aug 12, 2001
Gains were impressive from the '80s through the '00s, but since Sandy Bridge in 2011, improvements have slowed to a crawl on the desktop.

Unless you need to do something like rendering or video encoding, or run 20 VMs, the gains from Threadripper are not so great.

I'll finally be upgrading my i5-2500 non-K gaming system this year, but the 30%(?) speed bump after 6.5 years (for 4-core loads) is pretty sad.
 

nathanddrews

Graphics Cards, CPU Moderator
Aug 9, 2016
[embedded YouTube video]
Even a Pentium 4 661 can be a decent daily driver, given the right platform (775 w/8GB DDR2, SSD, PCIe, GTX 660, etc.). At the end of the day, it's just a CENTRAL PROCESSING UNIT, not the be-all, end-all component. Come to think of it, this video is just more evidence that the GPU is the single most important component in a computer. Even if you don't play games, the acceleration of desktop animations, encode/decode support for nearly every major codec, and other OpenCL/CUDA acceleration gains in applications can completely make or break your experience.

I have to wonder how well my old Tualatin or Thunderbird would stack up today if they could use DDR4, PCIe, SATA/NVMe. Oh well. (EDIT: Now that I think of it, pretty bad. haha)

 

Blitzvogel

Platinum Member
Oct 17, 2010
Per-core performance gains have slowed massively. It's all about packing on the cores, with minimal improvements generation to generation. Prices need to be considered as well; Ryzen is thankfully bucking the trend and forcing Intel to do so too.

The other argument is "how much more CPU power do we realistically need?" Just like with that P4 video above me, I could very easily argue that for what we typically use computers for, a much weaker processor could do the job, and that the software we use today is extremely bloated to appeal to our visual senses while monitoring us and running all the crap we want at the same time. Also, the power of today's CPUs is realistically used only in very short bursts when it's needed, especially in mobile devices, to preserve battery life. You don't need a quad-core just to browse the internet, buy crap off Amazon, or watch Netflix.
 

Topweasel

Diamond Member
Oct 19, 2000
could very easily argue that for what we typically use computers for, a much weaker processor could do the job, and that the software we use today is extremely bloated to appeal to our visual senses

Honestly, I think software has been getting less bloated. It was annoying back in the day when, on single- and dual-core systems, just a virus scanner would take up a whole core during any kind of file access. Maybe in terms of storage programs aren't getting much smaller, but they have been steadily shrinking in overhead, making a faster CPU even less necessary.

What I will say is that CPU core counts and memory have made computers monster multitaskers compared to the past (so much so that in a work setting program usage has exploded to the point that people expect to be able to run a million instances of everything, and a machine is "slow" for not being able to do so). But there are also more and more professional and prosumer workloads that can legitimately use these larger CPUs. Part of this is probably Intel's segmentation: i3s and i5s have been the staple of the general user, and by not giving them extra resources, along with pushing Atom and ARM solutions, the lack of growth and the shift of general computing to mobile devices have led to slimmer overheads, while all the prosumer/pro stuff has its sights set on Xeon solutions.
 

Phynaz

Lifer
Mar 13, 2006
"Performance" has been all about power consumption, not raw processing speed.
 

Topweasel

Diamond Member
Oct 19, 2000
"Performance" has been all about power consumption, not raw processing speed.
Said nobody ever. Power usage only became a thing mid to late into the P4's life, when laptops were becoming more and more acceptable, and as far as desktops are concerned, even now it's a minor portion of the worry. Even looking at Threadripper vs. SL-X, the major issue isn't the power usage but cooling and overall thermal properties. Honestly, if it weren't for worrying about cooling the CPU, I would only care about idle power usage and wouldn't mind if my CPU used 100 W or 300 W under heavy load. It's a checkbox that people use when they're evangelizing their chosen selection, but it's not that big of a factor.

Obviously this applies to general-purpose and performance desktops. There are reasons to get a "desktop" CPU for certain form factors where performance per watt matters more.
 

DaveSimmons

Elite Member
Aug 12, 2001
Said nobody ever. Power usage only became a thing mid to late into the P4's life, when laptops were becoming more and more acceptable, and as far as desktops are concerned, even now it's a minor portion of the worry.

Cooling and noise. (At least for me.)

Cooling became a thing partly because the P4 duals were space heaters. So I got an Athlon 64 X2 for my gaming PC and used a giant (for its time) Zalman flower heatsink to keep the noise down.

If I weren't waiting to see what Coffee Lake brings, I'd be seriously considering a 7700 over a 7700K for my gaming PC upgrade, just for the 65 W TDP vs. 91 W with almost the same performance ("good enough" per core, the Ryzen fans would definitely claim). It annoys me that Intel was forced to offer a "factory overclocked" i7 to get around its lack of significant process improvements.
 

gipper53

Member
Apr 4, 2013
I've had computers in my life almost as long as I can remember, starting with the Commodore 64 when I was about 6. Every few years we got a new computer, and it was always mind-blowing how much faster the new PC was. A 4-year gap meant going from a 60 MHz Pentium to a 300 MHz Pentium. Similar jumps for the next 10 years.

That's tapered off a lot, but this marks the first time ever that I've had a PC for over 4 years (i7-3770) and don't really have a strong desire, let alone a "need," to upgrade it. Back in the day, after four years a machine would be so slow it was painful to use.
 

IntelUser2000

Elite Member
Oct 14, 2003
It annoys me that Intel was forced to offer a "factory overclocked" i7 to get around its lack of significant process improvements.

It may be "annoying," but that's the reality: the end of scaling. Improvements will still come, but from rethinking how things used to be done and from refinement over time. Intel used to say that if cars had followed Moore's law, we'd have $2 cars that go 3,000 mph and get 500 mpg, or something to that degree. Try flipping that around and imagine computer chips improving at the rate automobile technology improves.

Cars stopped improving massively decades ago because physics and reality became a brick wall. Computers just happened to start at a much lower level. A Model T from the 1900s would still be useful for transportation, even though it seriously lacks many things compared to cars today. Early computers are nearly useless to everyone today.
 

moinmoin

Diamond Member
Jun 1, 2017
It annoys me that Intel was forced to offer a "factory overclocked" i7 to get around its lack of significant process improvements.
In a way Intel started this much earlier already: a big part of the performance "improvements" in the generations since Sandy Bridge came from increasing frequencies, be it base, turbo boost 1, 2, or 3, the uncore, RAM speed, etc. That's why we've reached the point where the manufacturers use up all the overclocking headroom themselves. Today's CPUs are no longer really defined by the speed they can reach but by the thermal envelope that allows (or doesn't allow) a specific number of active cores to reach specific frequencies. That is why the focus is on micromanaging power usage (which is what enables the increasingly fine-grained turbos to begin with).
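A toy sketch of that per-active-core turbo binning in Python; the frequency table below is invented for illustration and doesn't describe any specific CPU:

```python
# Toy model of per-active-core turbo bins: the more cores are loaded,
# the lower the frequency the thermal/power envelope allows.
# This table is made up for illustration, not any real CPU's bins.
turbo_bins_ghz = {1: 4.5, 2: 4.4, 3: 4.2, 4: 4.0, 5: 3.9, 6: 3.8}

def allowed_frequency(active_cores: int) -> float:
    """Return the max turbo frequency for a given number of active cores."""
    clamped = min(max(active_cores, 1), max(turbo_bins_ghz))
    return turbo_bins_ghz[clamped]

for n in range(1, 7):
    print(f"{n} active core(s): {allowed_frequency(n):.1f} GHz")
```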
 

R0H1T

Platinum Member
Jan 12, 2013
Intel used to say that if cars had followed Moore's law, we'd have $2 cars that go 3,000 mph and get 500 mpg, or something to that degree.
Cars also have a lot of moving parts, so it's a bad analogy.
 

nathanddrews

Graphics Cards, CPU Moderator
Aug 9, 2016
Cars also have a lot of moving parts, so it's a bad analogy.
So do CPUs, depending upon what scale of movement you're talking about. ;-)

I think more on point in the cars-vs.-CPUs comparison is probably the improvement and addition of features. Things like instruction-set enhancements, security modules, cache improvements, transistor density, and package size could all be equated to fuel injection, variable valve timing, airbags, and crumple zones. Likewise, the audio/video enhancements gained from iGPUs could be compared to improvements in car infotainment systems, autonomous vehicles, and so on.

Cars may not do a whole lot more than they did 100 years ago (point A to point B), but I would argue that the way in which they do it is vastly superior.
 

R0H1T

Platinum Member
Jan 12, 2013
Cars may not do a whole lot more than they did 100 years ago (point A to point B), but I would argue that the way in which they do it is vastly superior.
Why yes, that was part of what I was trying to highlight. If you look only at top speed, then sure, cars have stopped progressing for about a decade or so. If you look at efficiency, they just keep getting better; the same goes for safety, emissions, et al.
 

Phynaz

Lifer
Mar 13, 2006
Said nobody ever. Power usage only became a thing mid to late into the P4's life, when laptops were becoming more and more acceptable, and as far as desktops are concerned, even now it's a minor portion of the worry.

Every modern CPU/GPU is thermally limited.

It's more like everyone says it, always.
 

Topweasel

Diamond Member
Oct 19, 2000
Every modern CPU/GPU is thermally limited.

It's more like everyone says it, always.

You are talking about manufacturer restrictions based on available cooling, which has more to do with Intel originally developing throttling. The silicon itself is limited to max temps based on an understanding of when the transistors will degrade and break down. Everything has thermal limits. But that doesn't mean that power usage is
"Performance" has been all about power consumption, not raw processing speed.
as you noted. There are situations where perf per watt has more of an effect than outright performance. But the desktop, and specifically the enthusiast desktop environment, is hardly driven by power usage at all.

Even with SL-X, the talk about its power usage really has almost nothing to do with power, but with the difficulty of keeping temperatures down: even with pretty decent cooling it's hard to keep a mediocre overclock below the 100°C ceiling. Some of it is transfer plate size, some of it die size, some of it the power requirement, and a lot of it is the terrible heat transfer characteristics of the substance they use to connect the die and the heat plate (and the gap between them). But if a 250 W SL-X came out that worked well with a 250 W rated cooler, ran at 50°C idle, and at full load stayed at 80°C with that cooler (well below the 100°C limit), no one would have a problem with it, except people who weren't going to buy one anyway and just need a talking point. Threadripper uses more power, but that hasn't been an issue because the contact area, split dies, limited OC potential, and great die/cold plate/cooler heat transfer have made it very manageable to cool.

Basically, power usage matters in development because you have to set limits and give detailed info to third parties. But it makes little difference otherwise and only plays a part in areas with limited cooling potential, usually due to tight dimensions for the full unit.

I feel we are debating similar points, just from different angles. Maybe if you expanded on your point instead of posting throwaway one-liners, we could clear up the confusion.
 

Excelsior

Lifer
May 30, 2002
Even a Pentium 4 661 can be a decent daily driver, given the right platform (775 w/8GB DDR2, SSD, PCIe, GTX 660, etc.).


Yeah, so after about 9 years I'm finally upgrading from my C2Q 6700, 8GB DDR2, SSD, and GTX 970 (a recent purchase from a friend). It is still serviceable as a daily driver, and I can even play many games from up until about 2013 or 2014. Lately I've been playing the UT Pre-Alpha and it runs okay.
 

nathanddrews

Graphics Cards, CPU Moderator
Aug 9, 2016
Yeah, so after about 9 years I'm finally upgrading from my C2Q 6700, 8GB DDR2, SSD, and GTX 970 (a recent purchase from a friend). It is still serviceable as a daily driver, and I can even play many games from up until about 2013 or 2014. Lately I've been playing the UT Pre-Alpha and it runs okay.
Cool! What are you going to get?
 

IntelUser2000

Elite Member
Oct 14, 2003
Cars may not do a whole lot more than they did 100 years ago (point A to point B), but I would argue that the way in which they do it is vastly superior.

They improved, but not at the rate computer chips did (well, that's obvious).

At the high end, cars did not improve much. It's kinda similar to how chip improvements are going into efficiency and low-power parts too.

At some point, everything hits a wall where pushing further isn't worth it because it's not practical.