Question Why did older CPUs not use more power / clock higher?

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Reading the Interview with Jim Keller I noticed this sentence:

which at the time we thought were huge. These were 300 square millimeters at 50 watts, which blew everybody's mind.

Yeah, I think 486s and below didn't even have heat sinks if I recall correctly.

But my question is why? Why weren't they designed to use more power and hence increase their computing capabilities? Was it process limitations? Tool limitations in CPU design?
 
  • Like
Reactions: Tlh97 and CHADBOGA

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
They were likely clock-limited by design; we didn't cross the 1 GHz mark until the Pentium III (the top dog was about 30W TDP). I blame the process, but it must be a number of other things as well.
 
Last edited:

TheELF

Diamond Member
Dec 22, 2012
3,973
731
126
All the components of the CPU were so huge back then, and there were so few of them, that they couldn't use more power no matter how much they wanted to - they would just burn out.
All the branch prediction, SMT, AVX, cache, and so many other things that draw a lot of power today didn't even exist back then.


Edit:
A single core of a modern CPU still uses about 50W including the package.
[Attached: per-core power chart for a Ryzen 9 5900X]
 

Soulkeeper

Diamond Member
Nov 23, 2001
6,712
142
106
Many of the older CPUs had fewer stages in the pipeline, which made them less well suited for higher clock speeds.
Not to mention countless other advancements in design and process tech.
The higher power usage we see today is at least in part due to the competitive nature of the industry, imo.
Power usage for your typical video card today is insanely higher than it was years ago.
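As a toy illustration of the pipeline-depth point above (a Python sketch with made-up numbers, not modeled on any real CPU): splitting the same logic across more stages shortens the critical path per stage, so the clock can rise, at the cost of extra register overhead per stage.

Code:
# Toy model: a fixed amount of combinational logic split across N pipeline stages.
# Each stage also pays a fixed flip-flop (latch) overhead. More stages -> shorter
# per-stage delay -> higher achievable clock. Numbers are invented for illustration.
total_logic_delay_ns = 10.0   # total logic delay of the design (assumed)
latch_overhead_ns = 0.2       # per-stage register overhead (assumed)

for stages in (5, 10, 20, 30):
    stage_delay_ns = total_logic_delay_ns / stages + latch_overhead_ns
    max_clock_ghz = 1.0 / stage_delay_ns   # 1/ns = GHz
    print(f"{stages:2d} stages -> ~{max_clock_ghz:.2f} GHz max clock")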
 

Doug S

Platinum Member
Feb 8, 2020
2,285
3,559
136
There is a lot more to making CPUs draw more power than simply increasing the voltage to crank the clock. You need to be able to cool it, you need to be able to deliver clean power, you need to be assured it won't be damaged over time, and it has to be able to operate without error when clocked higher.

These sort of things always start out in the high end (which back then was the RISC workstation market) and filter their way down to the mainstream.

A lot of the power increase was driven by necessity - or rather lack of necessity. So long as Moore's Law was running smoothly and you could get huge performance bumps every couple of years with a new process, there was less incentive to go looking for ways to make computers draw more power or be louder. It was only when the easy performance gains became harder to come by that they started looking elsewhere, and increasing power budgets was the next easiest solution, second only to letting Moore's Law do its thing.

You can look at Intel's progression - they didn't start really cranking the TDP until the Pentium 4. A high-clock, heavily pipelined CPU is by necessity going to draw more power, and Intel was able to crank up those marketing MHz by leaps and bounds through a combination of improved process and higher TDP. They were talking about eventually reaching 10 GHz, but long before they reached it they hit a "power wall" - a point where increasing the power further was having little benefit. They knew they'd never reach 10 GHz, and people were starting to rebel against PCs sounding like a jet taking off (because the heatsink/cooling technology for such high power CPUs hadn't caught up yet).

As time has gone on and new processes help less and less with performance increases, the only "easy way" to increase performance has been through increases in the power budget / allowable TDP, allowing more cores and touting multithreaded benchmarks as the ones that matter most. That TDP increase has come both overtly, in selling CPUs with a higher listed TDP, and covertly, e.g. Intel's constantly changing definitions of how much power can be drawn for short bursts in turbo modes.

Meanwhile you've had the mobile world running alongside, with the same power levels for the past decade plus, because phones have some pretty hard constraints on power. No matter how well it performs, you will have a hard time selling a phone that gets hot enough that it becomes uncomfortable to hold, and the higher the power draw, the bulkier it is due to needing a bigger battery. So there's a hard limit that simply doesn't exist in the PC world that prevents that kind of power limit creep.
 

Cogman

Lifer
Sep 19, 2000
10,277
125
106
Older CPUs COULDN'T use more power. There's ultimately a size limit on how big you can make CPUs (due to silicon constraints), and that constrained the number of transistors you could place on a CPU.

As early CPUs went through node shrinks, another thing that happened is they actually saved power. That's because, for a while, with every step down the voltage and switching energy per transistor went down, which ultimately meant power consumption got better.

That trend started to change around the 0.25 micrometer size. The power gains started tapering off while transistor density kept increasing. It eventually reversed (smaller nodes needed more power), which required the invention of FinFETs.

Now, node shrinks provide little in the way of power consumption benefits and mostly just allow for more densely packed transistors. As a result, AMD and Intel have spent a lot of money on power gating their CPUs to avoid blowing the power budget.
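For reference, the effect described above is usually framed with the dynamic-power relation P ≈ C·V²·f. Here is a minimal sketch of the classic, idealized scaling behavior, using normalized made-up values rather than real process data:

Code:
# Idealized "Dennard" scaling: each shrink scales linear dimensions, capacitance
# and voltage by ~0.7x while frequency rises ~1.4x, so power per transistor drops
# roughly in half and power density stays flat. Once voltage stopped scaling
# (leakage), this broke down - which is the reversal described above.
s = 0.7                                # linear scale factor per generation (idealized)
C, V, f, area = 1.0, 1.0, 1.0, 1.0     # normalized starting values

for gen in range(4):
    power = C * V**2 * f               # dynamic power per transistor
    print(f"gen {gen}: power/transistor={power:.2f}, power density={power/area:.2f}")
    C, V, f, area = C*s, V*s, f/s, area*s*s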
 
  • Like
Reactions: Magic Carpet

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
But my question is why? Why weren't they designed to use more power and hence increase their computing capabilities? Was it process limitations? Tool limitations in CPU design?

Simple answer: That's how progression works. You learn 1+1 before you learn about exponents. And you learn your ABCs before you learn 1+1.

Detailed answer: Many reasons.

-Heatsinks weren't developed yet. Heatsink technology improved pretty much because of computers. Modern HSFs can handle 250W. Never mind 30 years ago - even 10 years ago, 100W was hard to deal with.
-CPUs became unstable at lower temperatures than modern ones, which are much more robust. I remember having stability issues on Pentium IIIs once they reached 70C. I cooled them better and the issues went away.
-Operating systems added to that as well. Anyone remember the pre-Vista days, when Windows crashed all the time or threw errors?
-Research wasn't ready to be implemented. It's said that in computer science many of the things we use today have existed as ideas for 30+ years.

Like the reason "AI" is viable is because we have petabytes and exabytes to work on. The fundamentals have always existed. It wasn't ready when RAM sizes were in the kilobytes range.

Superscalar execution, advanced branch prediction, and multi-level caches are all ideas that once lived only in research papers.

Heck, even dividing the CPU into more pipeline stages for the sole purpose of increasing clock speeds started out as just an idea!
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
7,870
6,103
136
A lot of it just comes down to a simple case of "but that's never been done before" and to a certain degree you can't have massive jumps without intentionally investing in other areas that are required to support them.

When CPUs were passively cooled you couldn't just crank out a new model that required coolers that didn't yet exist on the market. Half of the design team at the time would have thought you barbaric for even suggesting going down that road.

Really it's just slow progress and competition that pushes the state of the art. At some point companies couldn't make a good enough CPU to beat the competition if it was being passively cooled and so the boundaries were pushed a little. And then a little bit more. Do that long enough and now no one bats an eye at 300W CPU coolers.

A lot of the cool stuff we see today isn't really all that new. A lot of it was researched years or decades ago but wasn't feasible at the time for any number of reasons. Either those constraints no longer exist or the technology underwent enough evolutions to work around the problems preventing it from becoming mainstream before.
 
  • Like
Reactions: Magic Carpet

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
-Operating systems added to that as well. Anyone remember the pre-Vista days, when Windows crashed all the time or threw errors?
Absolutely, in fact I’ve just experienced such a crash on my retro rig. See the attached pic. That’s Windows 9x at its finest 😁 But you really meant to say, pre-Windows 2000 days. Luckily, Microsoft was smart enough to move NT into the consumer space.
Where is the god damn ANY key?! /sarcasm.
 
Last edited:
  • Like
Reactions: lightmanek

Doug S

Platinum Member
Feb 8, 2020
2,285
3,559
136
Absolutely, in fact I’ve just experienced such a crash on my retro rig. See the attached pic. That’s Windows 9x at its finest 😁 But you really meant to say, pre-Windows 2000 days. Luckily, Microsoft was smart enough to move NT into the consumer space.


If anything that makes it EASIER to crank up the power on a CPU - since the end user will just blame Microsoft for all the crashes, whether Microsoft is responsible or it's a CPU that's unstable at the clock rate and/or temperature at which it's operating!

That's probably one of the reasons we went from parity memory to unprotected memory and ECC never became a thing on the desktop. It was silly to worry about memory stability when you were running a crap OS that crashed on its own far more often than you'd ever see a memory error. OK, part of that was Intel's greed in wanting to make people pay more to get ECC support, but consumers not having a reason to care made that easy to do.
 
  • Like
Reactions: Magic Carpet

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
Meanwhile you've had the mobile world running alongside, with the same power levels for the past decade plus, because phones have some pretty hard constraints on power. No matter how well it performs, you will have a hard time selling a phone that gets hot enough that it becomes uncomfortable to hold, and the higher the power draw, the bulkier it is due to needing a bigger battery. So there's a hard limit that simply doesn't exist in the PC world that prevents that kind of power limit creep.

This^^

I wouldn't mind a slightly bulkier phone with a bigger battery of course.

Where is the god damn ANY key?! /sarcasm.

You could (can still?) get keyboards or keycaps with an Any-key. Very practical for certain family members... ;)
 
  • Like
Reactions: Magic Carpet

naukkis

Senior member
Jun 5, 2002
706
578
136

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Meanwhile you've had the mobile world running alongside, with the same power levels for the past decade plus, because phones have some pretty hard constraints on power.

That's not exactly true. We went from having 5 Wh batteries to nearly 20 Wh. Actually, flip phones lasted weeks on battery.

Some phones use heatpipes, while the first-generation smartphones used much smaller cooling solutions.
 

Doug S

Platinum Member
Feb 8, 2020
2,285
3,559
136
That's not exactly true. We went from having 5 Wh batteries to nearly 20 Wh. Actually, flip phones lasted weeks on battery.

Some phones use heatpipes, while the first-generation smartphones used much smaller cooling solutions.

All heatpipes can do is better distribute heat throughout the phone's body to avoid a hot spot. They don't lift the thermal constraint at all.

The main reason modern phones CAN have a larger battery than flip phones is that they have much larger screens, so there is more area under which to fit the battery.

One of Samsung's engineers even said the reason they followed the no name Chinese companies that started the "phablet" trend is because it allowed them to make the battery bigger without making the phone thicker.
 
  • Like
Reactions: Magic Carpet

Mopetar

Diamond Member
Jan 31, 2011
7,870
6,103
136
All heatpipes can do is better distribute heat throughout the phone's body to avoid a hot spot. They don't lift the thermal constraint at all.

The main reason modern phones CAN have a larger battery than flip phones is that they have much larger screens, so there is more area under which to fit the battery.

One of Samsung's engineers even said the reason they followed the no name Chinese companies that started the "phablet" trend is because it allowed them to make the battery bigger without making the phone thicker.

If you can spread the heat to a wide enough area it does mean that you can run hotter than you otherwise could. Of course no one wants to carry around a sophisticated cooling system in their phone because the extra bulk and weight just aren't worth the tradeoffs.

Flip phones also weren't the only popular devices prior to modern smartphones. Nokia had those large candy-bar style phones that were incredibly popular and had plenty of room for battery.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I'm saying, flip phones also lasted for weeks before recharging. That's in use, not just idling.

You add more features, you use more power. That's how things go.
 

Doug S

Platinum Member
Feb 8, 2020
2,285
3,559
136
I'm saying, flip phones also lasted for weeks before recharging. That's in use, not just idling.

You add more features, you use more power. That's how things go.

What flip phone did you have that lasted weeks in use, as opposed to weeks of standby time? I had a RAZR for a couple of years, and I charged it every few days. I charge my 11 Pro Max every two days (and used to charge it every three when it was brand new) despite using it more hours per day. Granted, today that use is almost all apps and messaging, with almost no calling.

The RAZR was almost all calling with a little messaging, and the only time I ever did anything else was using its clunky WAP capability to check the radar while golfing, to see whether the approaching dark clouds contained rain or not, lol!
 
  • Like
Reactions: Thunder 57

coercitiv

Diamond Member
Jan 24, 2014
6,221
12,013
136
All heatpipes can do is better distribute heat throughout the phone's body to avoid a hot spot.
You mean they increase the effective surface area available for heat dissipation while also increasing the average temperature of that surface, within the manufacturer's skin temperature limits? Sounds to me like heatpipes can have an exponential effect on cooling performance.
 

Doug S

Platinum Member
Feb 8, 2020
2,285
3,559
136
You mean they increase the effective surface area available for heat dissipation while also increasing the average temperature of that surface, within the manufacturer's skin temperature limits? Sounds to me like heatpipes can have an exponential effect on cooling performance.

The heat is going to spread around regardless, so it will always be radiating from the entire surface area of the device. Radiative cooling is more efficient the larger the temperature difference, so a "hot spot phone" will radiate better from hot spot(s) and less well from the rest of the surface than a "heat pipe phone" that did a better job of evening out the temperature.

So maybe an "effect" on cooling performance, but "exponential"? Hardly. Where are the heat pipe phones that burn as much power as a laptop if you think the effect is a large one?
 

Mopetar

Diamond Member
Jan 31, 2011
7,870
6,103
136
The heat is going to spread around regardless, so it will always be radiating from the entire surface area of the device. Radiative cooling is more efficient the larger the temperature difference, so a "hot spot phone" will radiate better from hot spot(s) and less well from the rest of the surface than a "heat pipe phone" that did a better job of evening out the temperature.

Heat transfers through air rather poorly, so if you just had an SoC completely separated from the case, the chip would probably overheat to some degree. Really, just giving it some contact with the case will let it spread the heat out fine, and in a phone the surface area of the case is ultimately the limit on how far the heat can be spread. A CPU/GPU cooler has stacks and stacks of fins to maximize surface area, but a phone can't do that, so fancy cooling solutions don't do a lot of good.

Still, you wouldn't want a hot-spot phone, because you have to hold the thing in your hand and trying to radiate all of the heat from that one spot is going to make it impossible to hold, or even a potential safety issue where it could cause minor burns. It also won't radiate better in a general sense - if that were true we wouldn't use coolers at all, since the CPU die itself is the hottest hot spot you could get.
 

coercitiv

Diamond Member
Jan 24, 2014
6,221
12,013
136
So maybe an "effect" on cooling performance, but "exponential"? Hardly. Where are the heat pipe phones that burn as much power as a laptop if you think the effect is a large one?
Exponential relative to the base value of a phone. Heatpipes increase radiating surface and average temperature of that surface. Both factors linearly affect cooling, hence combining the two results in an exponential effect.

The heat is going to spread around regardless
The equivalent heat gradient you're thinking of likely requires unsustainable temperatures in the center section to dissipate the same amount of energy. In other words, you won't get the same TDP as with heatpipes, since the SoC will throttle.
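As a back-of-the-envelope sketch of the surface-area argument (a crude convection-only model; the coefficient, skin-temperature limit, and areas are all assumed, and the absolute wattages come out unrealistically low - only the relative scaling is the point):

Code:
# Dissipation ~ h * A * (T_skin - T_ambient), with skin temperature capped by the
# manufacturer's comfort limit. Spreading the heat over more of the back panel
# raises how much power can be shed before the SoC has to throttle.
h = 10.0            # W/m^2/K, rough natural-convection coefficient (assumed)
t_skin_max = 43.0   # degC, assumed skin-temperature limit
t_ambient = 25.0    # degC
delta_t = t_skin_max - t_ambient

hot_spot_area = 0.02 * 0.02     # m^2, heat concentrated in a 2x2 cm patch
full_back_area = 0.07 * 0.15    # m^2, heat spread across a 7x15 cm back panel

for name, area in (("hot spot only", hot_spot_area),
                   ("spread across the back", full_back_area)):
    print(f"{name}: ~{h * area * delta_t:.2f} W at the skin limit")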


 

dullard

Elite Member
May 21, 2001
25,091
3,448
126
One of Samsung's engineers even said the reason they followed the no name Chinese companies that started the "phablet" trend is because it allowed them to make the battery bigger without making the phone thicker.
Such a shame. If only they would make the phone thicker. That would give us more battery life and a stronger phone, so we could ditch phone cases. Thin phones are just marketing crap: we end up with shorter battery life and, once the case goes on, a thicker end result anyway.
 

Hougy

Member
Jan 13, 2021
77
60
61
The process determines the switching speed of the transistors. No matter how deeply you pipeline your CPU, the clock frequency can't be higher than the transistor switching speed allows.


After the Pentium 4 the limit changed from transistor switching speed to power consumption, especially localized hotspots - silicon cooling capability started to be the limiting factor.
This is pure gold. They are predicting a 45 nm node in 2007 using EUV. Is there more old stuff like this?
Edit: and terahertz transistors too
 
Last edited:

Doug S

Platinum Member
Feb 8, 2020
2,285
3,559
136
This is pure gold. They are predicting a 45 nm node in 2007 using EUV. Is there more old stuff like this?
Edit: and terahertz transistors too

Last I heard the current record for transistor switching speed is around 600 GHz, so they aren't THAT far away from terahertz. EUV was a couple of process generations away for probably 15 years before it finally happened (and laughably, it wasn't Intel who did it, as they still haven't sold a single chip made using EUV). 450 mm wafers were also about two process generations away at that time, but that will never happen.

Just because we have 5 GHz CPUs doesn't mean that's the speed the transistors are switching at. Modern CPUs have something like 20 FO4 delays per cycle, which implies we've had transistors switching at 100 GHz or more in our modern CPUs. Likely well in excess of that, as you have to account for wire delay as well as FO4 delay in your worst-case timing path.
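To make that FO4 arithmetic concrete, here's a quick sketch (taking the ~20 FO4-per-cycle figure above at face value; purely illustrative):

Code:
# At 5 GHz the cycle time is 200 ps; ~20 FO4 delays per cycle puts one FO4
# gate delay around 10 ps, i.e. an equivalent gate "switching rate" of ~100 GHz.
clock_ghz = 5.0
cycle_ps = 1000.0 / clock_ghz              # 200 ps per cycle
fo4_per_cycle = 20                         # assumed logic depth per stage
fo4_delay_ps = cycle_ps / fo4_per_cycle    # ~10 ps per FO4
print(f"cycle: {cycle_ps:.0f} ps, FO4: {fo4_delay_ps:.0f} ps, "
      f"implied gate rate: ~{1000.0 / fo4_delay_ps:.0f} GHz")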

Intel could have reached 10 GHz in the P4 without TOO much trouble - they already had the integer pipeline running at double the core clock, and the core was at around 3.8 GHz when they gave up on the P4, I believe. I'm sure the plan was to eventually expose that double-pumped clock in future iterations, which would make it a 7.6 GHz CPU. If they'd stuck with the P4 architecture a little longer they could have reached 10 GHz before long.

The problem is that each cycle would not have been accomplishing all that much, so the pipeline would be REALLY long, which is bad for branch misprediction penalties and for hiding load/store latency, so IPC would be terrible. It was also consuming a lot of power, at a time when consumers cared more and more about power usage in laptops and minimizing fan noise in desktops.

Reaching 10 GHz with a CPU that was power hungry, inappropriate for laptops, and wouldn't perform any better than a 2.5 GHz PPro/PIII/"Core" architecture part - with AMD being competitive at that time as well - would make continued sale of the P4 based on marketing MHz a difficult proposition. That's why they decided to hide clock rates behind model numbers going forward, so they wouldn't have to explain to consumers why the CPU in the latest PCs was clocked a lot slower than the one in last year's PCs, after spending 20 years teaching consumers that more MHz = more performance.
 
  • Like
Reactions: Hougy and moinmoin