How difficult would it be...

Haserath

Senior member
Sep 12, 2010
793
1
81
How difficult would it be to run each stage in a processor at a different voltage? Slower stages could receive higher voltage to keep up, and fast ones wouldn't draw as much power, but I don't even know whether this is possible with current processor design techniques/technologies.

Would it be easier to decouple the frequencies of certain blocks like decode (say, if decode were keeping the rest of the chip from clocking higher or running at a lower voltage) and just run a wider decoder to make up for it?

Imagine how efficient that could be if every stage could run at exactly the voltage that it needed for a given frequency...

The chip's overclock would still be limited by how much voltage the slowest stage could handle, but it would use less power getting there.

They would even be able to get better temperature readings, because they could place the sensors next to the highest-consuming part and know that's the limit.

I bet AMD could realize Bulldozer's full potential with a combination of the two (is 6GHz+ too much to ask on certain parts of the chip? Prescott ran the ALUs at 8GHz+, IIRC). 4GHz isn't bad considering the slowest stage is the limit.
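The power argument above comes down to the standard CMOS dynamic-power relation P ≈ C·V²·f: any stage that can hit the target frequency at a lower voltage saves power quadratically. A quick back-of-envelope in Python (the per-stage capacitances and voltages below are invented, purely for illustration):

```python
# Back-of-envelope dynamic power model, P = C * V^2 * f, with made-up
# per-stage capacitances and voltages (illustrative numbers only).

def dynamic_power(c_farads, v_volts, f_hz):
    """Switching power of a CMOS block: P = C * V^2 * f."""
    return c_farads * v_volts ** 2 * f_hz

F = 4e9  # one shared 4 GHz clock for every stage

# One voltage for the whole core, set by the slowest stage:
uniform = sum(dynamic_power(c, 1.2, F) for c in (1e-9, 2e-9, 1.5e-9))

# Per-stage voltages: fast stages only get what they need at 4 GHz:
per_stage = sum(dynamic_power(c, v, F)
                for c, v in ((1e-9, 0.9), (2e-9, 1.2), (1.5e-9, 1.0)))

print(f"uniform:   {uniform:.2f} W")
print(f"per-stage: {per_stage:.2f} W")
```

Even with made-up numbers, the V² term is what makes per-stage voltages attractive on paper.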
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Intel has an FPU design that will do 8GHz, but it's a real problem getting the rest of the chip there. It sounds like an interesting problem; you would need a lot of different VRMs and voltage pins to pull it off, so it would increase board space requirements and cost.

I suspect Intel already steps the voltage internally if necessary.
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
If anything, this should be easier going forward, as Intel is moving the VRMs onto the CPU with Haswell.
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
Intel has separate voltages for the "core" and "uncore" components of some processors. However, power transistors also have their own die area and power consumption costs, so I would think that they wouldn't want too many voltages on a chip. If it got to the point where half of the transistors were for power regulation and half were logic/memory, that would seem to be an obvious waste.

Furthermore, a logic gate cares about what frequency it is operating at, not if it is a decode transistor or an ALU. In order to sync the various stages of the pipeline, the execution units all operate at the same frequency (although some older designs call for some double-pumped sections). If the whole pipeline operates at the same frequency, I don't see where transistors of a different voltage would be called for.
 

Haserath

Senior member
Sep 12, 2010
793
1
81
Wall Street said:
Intel has separate voltages for the "core" and "uncore" components of some processors. However, power transistors also have their own die area and power consumption costs, so I would think that they wouldn't want too many voltages on a chip. If it got to the point where half of the transistors were for power regulation and half were logic/memory, that would seem to be an obvious waste.

Furthermore, a logic gate cares about what frequency it is operating at, not if it is a decode transistor or an ALU. In order to sync the various stages of the pipeline, the execution units all operate at the same frequency (although some older designs call for some double-pumped sections). If the whole pipeline operates at the same frequency, I don't see where transistors of a different voltage would be called for.
All the logic transistors need power anyway; it would just take more control. And since power would theoretically be lower, the delivery network shouldn't need to supply as much.

The difference between stages may not be the kind of logic but the amount/complexity of it. There are also queues between stages that hold data to keep things in sync.

Different voltages would mean every circuit faster than the slowest one could use a lower voltage to switch at the same speed.
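That "faster circuits can drop voltage and still make timing" point can be sketched with the alpha-power delay model, delay ∝ V/(V−Vt)^α. The Vt, α, and 20% slack figures below are assumed values, not real process data:

```python
# Sketch of trading timing slack for voltage, using the alpha-power delay
# model: delay ~ V / (V - Vt)^alpha. Vt, alpha, and the 20% slack figure
# are illustrative assumptions, not real silicon parameters.

VT, ALPHA = 0.35, 1.3

def relative_delay(v):
    return v / (v - VT) ** ALPHA

def voltage_for_delay(target, lo=VT + 0.05, hi=2.0):
    """Bisect for the supply voltage giving the target (relative) delay."""
    for _ in range(60):
        mid = (lo + hi) / 2
        # Delay falls as voltage rises, so too slow means raise the voltage.
        if relative_delay(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

v_nominal = 1.2
period = relative_delay(v_nominal)  # clock period set by the slowest stage

# A stage with 20% timing slack can drop to the voltage where its delay
# grows by a factor of 1/0.8 and still just fill the period:
v_fast = voltage_for_delay(period / 0.8)

print(f"slow stage: {v_nominal:.2f} V, fast stage: {v_fast:.2f} V")
```

Under these assumed numbers the stage with slack runs roughly a quarter-volt lower for the same clock.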
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
A couple of things come to mind. You would need to insert a voltage level shifter at every stage; that is a LOT of timing, area, and power you need to account for. The next is that you need to handle compromised power supplies: your power grid needs to be strapped as much as possible to prevent voltage droop. If the new voltage domain covers only logic that's stringy and small, you may not be able to build a robust enough power grid. You ideally want to cover a larger region, with the power-intense area far away from the grid cuts.

But yes, multiple voltage domains are good, and even same-voltage domains (where one can shut off) are good too. One gets more power savings, while the other just needs isolation gates rather than level shifters.
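That trade-off can be roughed out numerically: a domain's V² savings have to beat the level-shifter overhead on every signal crossing its boundary. A sketch with invented constants (the shifter power and capacitances are assumptions, not measured figures):

```python
# Rough sense of the voltage-domain trade-off: a separate domain only pays
# off if the V^2 savings exceed the level-shifter overhead on the boundary
# signals. All numbers below are illustrative assumptions, not silicon data.

F = 4e9  # shared clock, 4 GHz

def domain_net_saving(c_logic, v_old, v_new, n_boundary_sigs,
                      shifter_power_per_sig=0.5e-3):
    saved = c_logic * (v_old**2 - v_new**2) * F      # dynamic power saved
    overhead = n_boundary_sigs * shifter_power_per_sig
    return saved - overhead

# A big block with few boundary signals wins...
big = domain_net_saving(c_logic=2e-9, v_old=1.2, v_new=1.0,
                        n_boundary_sigs=200)
# ...while a tiny "stringy" slice with a wide interface can lose outright.
tiny = domain_net_saving(c_logic=0.05e-9, v_old=1.2, v_new=1.0,
                         n_boundary_sigs=600)

print(f"big block net saving:  {big:+.3f} W")
print(f"tiny block net saving: {tiny:+.3f} W")
```

The sign flip for the small block is the point: per-pipestage domains can cost more than they save.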
 

Haserath

Senior member
Sep 12, 2010
793
1
81
I forgot to add frequency in the second paragraph.
TuxDave said:
A couple of things come to mind. You would need to insert a voltage level shifter at every stage; that is a LOT of timing, area, and power you need to account for. The next is that you need to handle compromised power supplies: your power grid needs to be strapped as much as possible to prevent voltage droop. If the new voltage domain covers only logic that's stringy and small, you may not be able to build a robust enough power grid. You ideally want to cover a larger region, with the power-intense area far away from the grid cuts.

But yes, multiple voltage domains are good, and even same-voltage domains (where one can shut off) are good too. One gets more power savings, while the other just needs isolation gates rather than level shifters.
The only company I reasonably expect could pull this off by the end of the decade would be Intel. They've probably thought of it and either found it impossible (for now), not profitable, or not worth their time yet.

If Intel creates their on-die VRM circuitry, they could possibly control this well enough. They could separate the circuits' power delivery while keeping timing in check. Intel already plans on working with motherboard makers to create boards with enhanced timings for power savings.

Haswell might fit into phones with a good implementation...
 

Haserath

Senior member
Sep 12, 2010
793
1
81
Wall Street said:
Intel has separate voltages for the "core" and "uncore" components of some processors. However, power transistors also have their own die area and power consumption costs, so I would think that they wouldn't want too many voltages on a chip. If it got to the point where half of the transistors were for power regulation and half were logic/memory, that would seem to be an obvious waste.

Furthermore, a logic gate cares about what frequency it is operating at, not if it is a decode transistor or an ALU. In order to sync the various stages of the pipeline, the execution units all operate at the same frequency (although some older designs call for some double-pumped sections). If the whole pipeline operates at the same frequency, I don't see where transistors of a different voltage would be called for.

Actually, thinking about it some more: even if they had to add transistors, voltage scaling has immense power savings. It would basically trade die area for lower power.

With Intel having to find more ways to fill fab space and enter the phone/tablet market, this would be perfect.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Haserath said:
I forgot to add frequency in the second paragraph.

The only company I reasonably expect could pull this off by the end of the decade would be Intel. They've probably thought of it and either found it impossible(for now), not profitable, or not worth their time yet.

If Intel creates their on die VRM circuitry, they could possibly control this well enough. They could separate the circuits from power delivery, but keep timing in check. Intel already plans on working with motherboard makers to create boards with enhanced timings for power savings.

Haswell might fit into phones with a good implementation...

Multiple voltage domain parts exist today. But yes, not at the granularity of having every pipestage run at a different voltage and frequency. I would say the massive overhead of different voltage domains (that I mentioned) and synchronizers (for frequency differences) does not net you an overall savings. There is a balance of getting the biggest bang for your buck. You are correct that the inevitable path is to continue exploring finer levels of granularity; probably not every pipestage, but maybe major logic blocks (front-end decode, FP execution, etc.), and probably starting with voltage domains first.
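The "biggest bang for your buck" balance can be illustrated with a toy model: each extra voltage domain saves some power but adds a fixed overhead (regulator, level shifters, grid straps), so the net benefit peaks at a moderate number of domains. Every constant here is invented:

```python
# Toy model of voltage-domain granularity: finer domains keep saving power,
# but each extra domain adds a fixed overhead, so net benefit peaks at some
# intermediate granularity. All constants are invented for illustration.

F = 3e9
TOTAL_C = 6e-9          # total switched capacitance of the core
PER_DOMAIN_COST = 0.9   # watts of fixed overhead per extra domain

def net_saving(n_domains):
    # Assume each added domain lets the average supply sit 0.05 V closer
    # to each block's true minimum, bottoming out at 0.9 V.
    v_worst = 1.2
    avg_v = max(0.9, v_worst - 0.05 * (n_domains - 1))
    saved = TOTAL_C * (v_worst**2 - avg_v**2) * F
    return saved - PER_DOMAIN_COST * (n_domains - 1)

best = max(range(1, 16), key=net_saving)
print(f"best granularity: {best} domains, net {net_saving(best):.2f} W")
```

With these made-up numbers the sweet spot lands at a handful of domains, not one per pipestage, which matches the "major logic blocks first" intuition.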
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
You can't easily decouple frequencies within a core. Doing so makes your design asynchronous. You are getting into neural-network territory there: great potential, but way over the heads of most binary logic engineers. So... if you can't decouple frequencies, then there isn't much reason to have separate voltages.

The diagram below is for a single Pentium 4 core, but the following argument applies to pretty much every recent Intel CPU:
[Image: prescott-block-diagram.gif]

You can take this core and make it run at one clock, then make another core run at a different clock. You can make the circuitry that connects those cores run at a totally different clock. That's all Intel can do, even up to Haswell: one clock per core, and separately clocked uncore sections such as the cache and memory controller. But if you want, for example, to make those three ALUs run at different clocks and different voltages, you have to throw away the entire design and start from scratch.
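The cost of going asynchronous is easy to quantify in one dimension: every crossing between unrelated clocks typically needs a synchronizer costing a couple of receiving-clock cycles of latency. Illustrative arithmetic only (the pipeline depth and per-crossing penalty are assumptions):

```python
# Latency cost of making every pipestage its own clock domain: each
# asynchronous boundary needs a synchronizer, typically ~2 cycles in the
# receiving clock. Depth and penalty below are assumed, not real figures.

STAGES = 14                 # pipeline depth (made-up)
SYNC_PENALTY_CYCLES = 2     # per asynchronous crossing

sync_latency = STAGES                                       # one shared clock
async_latency = STAGES + (STAGES - 1) * SYNC_PENALTY_CYCLES # per-stage clocks

print(f"synchronous pipeline: {sync_latency} cycles")
print(f"fully async pipeline: {async_latency} cycles")
```

Nearly tripling pipeline latency is a steep price before any power is saved, which is why per-stage clocks stay off the table.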
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
The premise of this thread is basically why ARM's big.LITTLE exists.

Instead of attempting to create asynchronous voltage/frequency domains within a given core, you create heterogeneous cores and control them at a granular level, in a way that can be reliably DFM'ed (designed for manufacturing) with existing layout tools as well as reliably validated and binned en masse at the other end of the production line.

It is all about trade-offs, and not just the engineering kind discussed in this thread. As a business there are product trade-offs that must be considered when it comes to saleability and marketability.

Trade-offs must be made which account for yields, tester time, and how well the various features of the product can be pitched as adding value to the end-user.

For example, a fine-grained asynchronous double-pumped ALU might get you performance you want in your Pentium 4 design but it is a feature that on its own may not be packaged and sellable to the end-user because they just won't get why they should care about it.

The big.LITTLE concept is something that is marketable in part because it is a concept that the general consumer can grasp and be led to view as being a key feature of product xyz; and thus becomes an actionable factor in a purchase decision.
 

Haserath

Senior member
Sep 12, 2010
793
1
81
It seems easy to market. 50% higher clock, less power/energy used, extremely fine control over power.

This could basically be another Conroe for Intel. At first they could be conservative, then slowly make their way to finer levels for yield purposes.

They would possibly use their same core design(on a high level) but rework everything for this.

big.LITTLE is fine and all... But I'd rather see BIG with little's energy use. That's what Intel seems to be aiming for ATM anyway.
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
Funny how Haserath ignores the comments on actual engineering and insists on 50% higher clock and lower power. Since it sounds like you have the details worked out, I guess you are waiting to hear back from Intel's HR Department?

I wouldn't be surprised if Intel looked at this and found it used more power. More voltage planes cost more power and die space (power transistors are not made on the smallest nodes), buffers for voltage and frequency changes cost power, and more VRMs (at least 1-2 for each voltage) cost power even if they are on the chip package.
 

Haserath

Senior member
Sep 12, 2010
793
1
81
Wall Street said:
Funny how Haserath ignores the comments on actual engineering and insists on 50% higher clock and lower power. Since it sounds like you have the details worked out, I guess you are waiting to hear back from Intel's HR Department?

I wouldn't be surprised if Intel looked at this and found it used more power. More voltage planes costs more power and die space (power transistors are not made on the smallest nodes), buffers for voltage and frequency changes costs power, more VRMs (at least 1-2 for each voltage) cost power even if they are on the chip package.

The laws of physics don't work in dreams.