What is the rationale behind integrating the voltage regulator in Haswell?

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
The advantages of integrating the memory controller, expansion interfaces, and even the GPU are fairly self-evident. But I'm a bit puzzled as to Intel's thinking on the voltage regulators. There seem to be some obvious drawbacks but no real obvious advantages to me.

Drawback #1: Converting voltages is never 100% efficient, so simply by doing this on-chip means extra heat to dissipate.

Drawback #2: Higher cost for Intel due to development effort (fixed cost) and die size (variable cost).

Drawback #3: Potentially more risk of problems associated with power quality because of a lack of an intermediate step between the power supply and the CPU.

All of these are manageable, of course, but what's the upside? It's great news for motherboard manufacturers, but I don't see how it helps Intel much.

What do you think?
 

borisvodofsky

Diamond Member
Feb 12, 2010
3,606
0
0
The advantages of integrating the memory controller, expansion interfaces, and even the GPU are fairly self-evident. But I'm a bit puzzled as to Intel's thinking on the voltage regulators. There seem to be some obvious drawbacks but no real obvious advantages to me.

Drawback #1: Converting voltages is never 100% efficient, so simply by doing this on-chip means extra heat to dissipate.

Drawback #2: Higher cost for Intel due to development effort (fixed cost) and die size (variable cost).

Drawback #3: Potentially more risk of problems associated with power quality because of a lack of an intermediate step between the power supply and the CPU.

All of these are manageable, of course, but what's the upside? It's great news for motherboard manufacturers, but I don't see how it helps Intel much.

What do you think?

They're doing this for 3 reasons.

1.

Prevent overclocking tricks used by mobo vendors: LLC, VRM tuning.

2.

Definitively decide what their customers can do with the CPU.

3.

Screw the customer.

excessively colorful language removed - DrPizza
 
Last edited by a moderator:

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
1. Integrate everything you can
2. Make previously standard features as extra cost options
...
3. Profit
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
Intel is essentially squeezing other parts suppliers out, making their products a larger and larger proportion of the BoM.

It used to be that you got the CPU from Intel, and the chipset, GPU, etc. from somebody else.

Now, Intel will sell you the CPU, GPU, chipset, and soon power delivery. You don't even have a choice in the matter! This is actually terrible news for motherboard manufacturers, because it relegates them more and more to 'commodity' status as they lose the ability to differentiate from each other. I would say it is great news for big-box integrators, though, because it means lower design costs for them, and they don't mind because they differentiate through form factor and added bloatware anyway.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
This is not a recent trend. CPUs have been integrating more and more features ever since they were invented. This is a large part of why modern computers are cheaper than their predecessors: the number of features integrated into a single piece of silicon keeps going up.

It's simply progress.
 

Haserath

Senior member
Sep 12, 2010
793
1
81
Better control over processor supply voltages and currents?

More efficient VRMs?

Cheaper overall cost?

I would say integration = efficiency, and Intel wants the highest efficiency they can get. They advertise 20x longer standby time for a Haswell system compared to previous laptops.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Can be said with pictures:

integratedGMCH.jpg

coarse.jpg

fine.jpg

intelboard.jpg
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
It's all about power savings and making things smaller.

CPU voltage is around 1-1.1 V.
CPU power is around 70-130 W.

Power = V * I

When voltage is low, current is high. And when current is high, resistance is a significantly larger factor than when current is low.

---------------

With that background out of the way, consider that the CPU's supply voltage must travel across the CPU socket interface (socket pins). In an off-chip regulator scenario, the socket ends up designed to carry 100+ amps at low voltage. But with an on-chip regulator, the board can deliver normal +12 V and the socket interface carries significantly lower current.

Consider how much current that is... a CPU socket needs to be designed to reliably carry the kind of current used to start a small car. A CPU socket.

Putting the voltage regulator on the CPU package cuts the current carried across the socket by more than a factor of 10 (power arrives at 12 V instead of ~1 V). This dramatically reduces the number of pins required for power and ground transmission through the CPU interface while still keeping resistance acceptable, which allows smaller packaging. Or, if you keep the pin count the same, you can dedicate more pins to features like CPU PCIe lanes and fewer to carrying massive amounts of current at low voltage.

This is especially important in mobile form factors: reducing the complexity of the power interface makes the overall package smaller and easier to integrate into ultrabook-style enclosures.
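Concillian's arithmetic can be sketched with rough numbers (the 100 W load, 1 V core voltage, and equal contact resistance are illustrative assumptions, not Intel specs):

```python
# Rough illustration of why moving the regulator on-package cuts socket current.
# All numbers are illustrative assumptions.

cpu_power_w = 100.0  # assumed CPU load (W)
vcore = 1.0          # typical core voltage (V)
vin = 12.0           # supply rail if regulation happens on-package (V)

i_core = cpu_power_w / vcore  # current if the socket carries Vcore directly
i_12v = cpu_power_w / vin     # current if the socket carries the 12 V rail

print(f"Socket current at {vcore:.1f} V: {i_core:.0f} A")
print(f"Socket current at {vin:.0f} V: {i_12v:.1f} A")

# Conduction loss in the socket contacts is I^2 * R, so for the same
# contact resistance the loss ratio scales with the current ratio squared:
ratio = (i_core / i_12v) ** 2
print(f"I^2*R loss ratio (1 V vs 12 V delivery): {ratio:.0f}x")
```

At these assumed numbers the 12 V path carries about 8 A instead of 100 A, which is why the same power budget needs far fewer power/ground pins.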
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,772
4,685
136
The power MOSFETs, the components that account for 99% of the CPU supply's losses, obviously won't be integrated. It doesn't make sense, since they must handle as much as 10 W of thermal dissipation with 100 W CPU parts.

Moreover, the MOSFETs drive inductors that are absolutely not integrable. It would be a nightmare in terms of parasitic induced currents in sensitive parts of the circuitry, since peak currents are above 100 A in desktop parts.

From the photo above we can see that it is the MOSFET drivers and servo circuits that are integrated.
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
16,098
6,561
136
Intel is moving the desktop/laptop mainstream processor to a true SoC. The full transition probably won't happen until Skylake, however. Seems like the goal is overall lower system power consumption to better compete against ARM. Maybe we'll get some sort of Super Idle Mode out of it.

On the Ultrabook Haswell, it will also include the chipset on the processor in a multi-chip package. The rest of the mainstream processors will get this in Broadwell.
 

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
Thanks for the replies, all... especially Concillian. What you said makes perfect sense -- it's an extension of the move to powering CPUs from +12V instead of +5V, with less distance over which to carry a low-voltage, high-current feed. Could lead to lower pin counts in time as well, perhaps?
 

Abwx

Lifer
Apr 2, 2011
11,772
4,685
136
Thanks for the replies, all... especially Concillian. What you said makes perfect sense -- it's an extension of the move to powering CPUs from +12V instead of +5V, with less distance over which to carry a low-voltage, high-current feed. Could lead to lower pin counts in time as well, perhaps?

No, that doesn't make sense.

The CPU's switching-mode power supply uses large inductors that are not at all integrable. Just check how thick/large they are on a motherboard, close to what you call the VRMs, which are actually vertical power MOSFETs....

High-speed switched currents in these inductors are easily induced in nearby conductors, creating parasitic current amplitudes well above the ones used to switch the logic gates on and off, which are in the microampere range.

In terms of electromagnetic compatibility (EMC), it would be a nightmare for EE engineers.

As I pointed out already, the picture shows that the control circuit is on the SKU but the high-power switching components are external.
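The 12 V to ~1.1 V step-down Abwx is describing is a buck converter. A quick sketch of the ideal relationships (the switching frequency and per-phase inductance below are assumed values for illustration, not board specs):

```python
# Ideal buck-converter relationships for the 12 V -> ~1.1 V step-down
# discussed above. Switching frequency and inductance are assumptions.

vin = 12.0    # input rail (V)
vout = 1.1    # target core voltage (V)
f_sw = 300e3  # assumed switching frequency (Hz)
L = 0.5e-6    # assumed per-phase inductance (H), i.e. 0.5 uH

duty = vout / vin  # ideal duty cycle: D = Vout / Vin
t_on = duty / f_sw # high-side switch on-time per cycle (s)

# Peak-to-peak inductor ripple current: dI = (Vin - Vout) * D / (f * L)
ripple = (vin - vout) * duty / (f_sw * L)

print(f"Duty cycle: {duty:.3f}")
print(f"On-time per cycle: {t_on * 1e9:.0f} ns")
print(f"Inductor ripple current: {ripple:.1f} A peak-to-peak")
```

The several-amp ripple per phase at these assumed values is why the inductors are physically large and why Abwx argues they stay off-die.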
 
Dec 30, 2004
12,553
2
76
^Intel could use on-die FETs or BJTs. Need less power? Shrink the channel and limit the current allowed to the CPU. No need for a switched-mode PSU or inductors.
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
There seem to be some obvious drawbacks but no real obvious advantages to me.

Drawback #1: Converting voltages is never 100% efficient, so simply by doing this on-chip means extra heat to dissipate.

Drawback #2: Higher cost for Intel due to development effort (fixed cost) and die size (variable cost).

Drawback #3: Potentially more risk of problems associated with power quality because of a lack of an intermediate step between the power supply and the CPU.

I'm not an expert well read in these areas, but here are my guesses.

1) But the heat is now all in one location, so only one cooling system is needed instead of several (most of them small ones).

One advantage is better use of the power once converted, as there is less resistance (and loss) from connectors and long trace runs from the voltage regulator to the CPU, especially given the keep-out areas used to ensure proper fit of CPU coolers.

A second advantage is direct control of the voltage regulators to reduce power consumption in low-power states.

2) It is a higher cost initially, but near zero cost to mass produce. It makes development of boards that use that CPU cheaper, since fewer parts are needed and there is less complexity in design, and so a smaller end product (demand from end consumers for smaller laptops).

3) Actually, given Intel has no control over board manufacturers, this is more likely to end up with better voltage regulators, as they will be designed for the task and not cost-reduced to the cheapest supplier. (On the budget range of motherboards, that is; high-end boards might have better regulators than the ones on the CPU, but it is the mass market whose demand Intel needs to address.)

Besides, it might be like cache on the i7/i5, where lower-specced CPUs are cut back in features, so a K CPU might have 50% more voltage regulators (addressing the concerns the overclocking market has about this change).
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
It's not like you can't have VRMs in addition to what's integrated on the processor...
 

Abwx

Lifer
Apr 2, 2011
11,772
4,685
136
^Intel could use on-die FETs or BJTs. Need less power? Shrink the channel and limit the current allowed to the CPU. No need for a switched-mode PSU or inductors.

No need for a switched-mode PSU??...

And how is the CPU supply voltage provided, if not by an onboard switch-mode PSU that converts the 12 V down to about 1.1 V?..

An efficient switch-mode PSU uses inductors, by definition.

As repeated ad nauseam, we can clearly see that it is only the CPU PSU's control and command unit that is on the CPU.
 
Last edited:
Dec 30, 2004
12,553
2
76
No need for a switched-mode PSU??...

And how is the CPU supply voltage provided, if not by an onboard switch-mode PSU that converts the 12 V down to about 1.1 V?..

An efficient switch-mode PSU uses inductors, by definition.

As repeated ad nauseam, we can clearly see that it is only the CPU PSU's control and command unit that is on the CPU.

Dunno. Not FETs or BJTs.
 

Abwx

Lifer
Apr 2, 2011
11,772
4,685
136
Dunno. Not FETs or BJTs.

Neither; they are vertical power MOSFETs used as high-power switching devices to charge the inductors...

Here they are:

In the first rank we see the PSU capacitors (surely integrable..:D).
In the second rank, marked R80, are the inductors, and in the third rank we can see the power MOSFETs.

Hey, I forgot the high-power, high-speed rectifying diodes....;)

gigabyte-ga-870a-ud3.28335057.jpg
 
Dec 30, 2004
12,553
2
76
Neither; they are vertical power MOSFETs used as high-power switching devices to charge the inductors...

Here they are:

In the first rank we see the PSU capacitors (surely integrable..:D).
In the second rank, marked R80, are the inductors, and in the third rank we can see the power MOSFETs.

Hey, I forgot the high-power, high-speed rectifying diodes....;)

gigabyte-ga-870a-ud3.28335057.jpg

presumably some implementation of these: http://www.sparkfun.com/datasheets/Components/FAN1117A.pdf
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
It's not like you can't have VRMs in addition to what's integrated on the processor...

I've been thinking this same thing for quite some time now. I'm no engineer, but with my limited knowledge I'm not sure why this wouldn't be possible. The question is whether it would be necessary.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
A little hint, perhaps, to think outside the box about the analog VRMs we use today.
foxconn_digital_pwm.jpg

Digital_PWM_tn.jpg
 
Last edited:

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
The advantages of integrating the memory controller, expansion interfaces, and even the GPU are fairly self-evident. But I'm a bit puzzled as to Intel's thinking on the voltage regulators. There seem to be some obvious drawbacks but no real obvious advantages to me.

Drawback #1: Converting voltages is never 100% efficient, so simply by doing this on-chip means extra heat to dissipate.

Drawback #2: Higher cost for Intel due to development effort (fixed cost) and die size (variable cost).

Drawback #3: Potentially more risk of problems associated with power quality because of a lack of an intermediate step between the power supply and the CPU.

All of these are manageable, of course, but what's the upside? It's great news for motherboard manufacturers, but I don't see how it helps Intel much.

What do you think?

You see an upside for the M/B manufacturers and no plus for Intel. Man, you have it backwards: it's a downside for the M/B manufacturers. Motherboards using Intel should be a lot cheaper than AMD's, which is an upside for Intel. It's also a downside for the M/B makers, since there's less margin in the product with less to add to the board. This is a big upside for Intel. Intel is not trying to please O/Cers but the OEMs. Intel has been doing smart things since 2006, and this doesn't seem to be a letdown.
 

formulav8

Diamond Member
Sep 18, 2000
7,004
522
126
This is a big upside for Intel. Intel is not trying to please O/Cers but the OEMs.


That part. Intel is doing this for no one but themselves and their partners. In the end it will be worse for us tweakers and mobo makers in general. We'll have to wait for more details to know to what extent, though.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
That part. Intel is doing this for no one but themselves and their partners. In the end it will be worse for us tweakers and mobo makers in general. We'll have to wait for more details to know to what extent, though.

We've heard that year after year: Lynnfield won't OC, Intel limits OC in Sandy, etc., etc.

I bet the CMOS VRM will perform better.
 

jacktesterson

Diamond Member
Sep 28, 2001
5,493
3
81
Overclocking will take a hit, most likely.

AMD, please get competitive again. ...Dreams away.