Why do power supplies/mobos still mostly use the 12V rail?

Markey

Junior Member
Sep 28, 2010
20
0
0
So if I have my facts straight, PCIe mainly uses 3.3V and CPUs/chipsets/memory use less than 1.5 volts. However, ATX supplies still put most of their power on the 12V rail, and then motherboards have to downconvert to the lower voltages using expensive and wasteful voltage regulators. If there were a power supply that put most of its current on the 3.3V rail, then PCIe could run straight off the supply, and the CPU/chipset/memory could use much more efficient, cooler-running voltage regulators to downconvert to the appropriate voltages.

Why haven't we seen this transition? Currently something like $20 worth of parts goes into the voltage converters on a motherboard.
 

PreferLinux

Senior member
Dec 29, 2010
420
0
0
Actually, if you look, there has been a transition, just towards higher 12 V currents. Most high-power PCIe devices are graphics cards, and they mostly run off 12 V (as in, the GPU itself uses less than 1.5 V, stepped down by the card's own VRM). CPU voltage regulators used to run off 5 V, but they changed to 12 V.

Voltage regulators are actually pretty efficient, probably over 90%. And if you work out how much current they draw, you can understand why they are run off 12 V: for a given power, a higher input voltage means less input current, and less current means lower losses in the wires and connectors. Also, the regulator's own efficiency won't change much at all with differing input voltages.
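A quick back-of-the-envelope makes the point (hypothetical numbers: a 100 W CPU load and a 90% efficient VRM):

# Rough numbers: why CPU VRMs moved from 5 V input to 12 V input.
# Hypothetical load: a 100 W CPU, VRM assumed about 90% efficient.
cpu_power = 100.0                    # watts delivered to the CPU
vrm_eff = 0.90                       # assumed converter efficiency
input_power = cpu_power / vrm_eff    # ~111 W drawn by the VRM

for v_in in (5.0, 12.0):
    i_in = input_power / v_in
    print(f"{v_in:>4} V input -> {i_in:5.1f} A through the connector")
# 5 V input needs ~22 A; 12 V needs only ~9 A, so far less
# I^2*R loss in the wiring and connector pins.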
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
The two most common voltages used by chips now are 3.3V and 1.8V. Most supplies convert the 12V down to those. The reason for using 12V from the supply rather than 3.3V is the need for high currents: 1.8V @ 40A requires a very thick wire to deliver that voltage without losing power along the way. The numbers below put that in perspective.
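To put rough numbers on it (my assumption: about 0.02 ohms of round-trip resistance, roughly a metre of the 18 AWG wire typical in ATX cables), delivering the same 72 W at 1.8 V versus 12 V:

# Same 72 W delivered at 1.8 V vs 12 V, through ~0.02 ohm of
# round-trip cable resistance (rough figure for about a metre
# of 18 AWG wire -- an assumption, not a spec).
r_cable = 0.02   # ohms
power = 72.0     # watts at the load

for volts in (1.8, 12.0):
    amps = power / volts
    loss = amps**2 * r_cable   # I^2 * R heating in the wire
    drop = amps * r_cable      # voltage lost along the way
    print(f"{volts:>4} V: {amps:5.1f} A, {loss:5.2f} W lost, {drop:.2f} V drop")
# 1.8 V: 40 A -> 32 W wasted in the cable and a 0.8 V drop
# (nearly half the rail!). 12 V: 6 A -> ~0.7 W and 0.12 V.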

Power supplies in computers right now are regulated at the source, meaning the supply sets its output level based on the voltage measured at its own soldered connections. To deliver 1.8V or other low voltages directly, you would have to switch to feedback-regulated (remote-sense) supplies.

On a feedback supply there is an extra wire that runs from the device, say a hard drive, back to the supply. It carries no power itself; it is used only to measure the voltage actually arriving at the device. The idea is that if the supply is outputting 12.0V at the solder connection but the sense wire reads 11.94V, the supply knows to raise its output by 0.06V. It isn't done this way now because at 12V the difference doesn't matter much. Get down to low voltages like 1.8V, though, and the difference between 1.8V and 1.7V can make something stop working.
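A toy sketch of that sense-wire idea (everything here is illustrative: the cable resistance, the load, the correction loop; no real PSU works exactly like this):

# Toy model of remote-sense (feedback) regulation.
# All numbers are made up for illustration.
target = 12.0      # volts wanted at the device
r_cable = 0.02     # ohms of cable between supply and device
load_amps = 3.0    # current the device draws

setpoint = target  # supply starts by regulating at its own terminals
for step in range(3):
    at_device = setpoint - load_amps * r_cable  # what the sense wire reads
    error = target - at_device
    setpoint += error          # raise the output to cancel the drop
    print(f"step {step}: device sees {at_device:.3f} V")
# Step 0 the device sees 11.94 V; after one correction it sees
# 12.0 V, even though the supply's own terminals now sit above 12 V.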
 

Markey

Junior Member
Sep 28, 2010
20
0
0
Modelworks said:
The reason for using 12V from the supply rather than 3.3V is the need for high currents: 1.8V @ 40A requires a very thick wire...


I forgot about the need for a thick wire. That makes sense. As for the feedback wire, I bet CPUs already do this; it's just that the feedback wire only has to go from the CPU to the voltage regulator on the motherboard rather than running all the way back to the supply. Now that I think about it, if the power supply did have feedback wires, it would need one for each "rail": the CPU would need one, the video card would need one, the memory would need one, the chipset would need one, etc. That would mean a lot of wires running back to the PSU. I guess it does make sense to run everything off the 12V rail.

For anyone else who's interested, I looked up a few voltage regulators, and PreferLinux is right. There are parts like the LTC1553 that can downconvert 12V to CPU voltage at >90% efficiency. That's impressive. My original question was based on my experience with the LM317s I played with years ago, where the efficiency would go down (and the chip would get hotter) with a higher voltage differential between input and output. For example, if I put 12V in and wired it for 1.5V out, it had horrible efficiency. I guess power technology has come a long way since I last played with it.
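For anyone wondering why a switcher can do that, the trick is that a buck converter trades voltage for current instead of burning off the difference. Idealized numbers (the 40 A load and 90% efficiency are example figures, not a datasheet):

# Idealized buck (switching) converter, 12 V in -> 1.5 V out.
# It converts power, not current: the input current is only a
# fraction of the output current, unlike a linear regulator.
v_in, v_out = 12.0, 1.5
i_out = 40.0                 # amps at the CPU (example figure)
eff = 0.90                   # assumed real-world efficiency

duty = v_out / v_in          # ideal duty cycle, 12.5% here
i_in = (v_out * i_out) / (v_in * eff)
print(f"duty cycle {duty:.1%}, input current {i_in:.1f} A for {i_out} A out")
# -> about 5.6 A drawn from the 12 V rail to supply 40 A at 1.5 V.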

:)

Thanks for the informed responses!
Brian
 

PreferLinux

Senior member
Dec 29, 2010
420
0
0
Markey said:
My original question was based on my experience with the LM317s I played with years ago, where the efficiency would go down (and the chip would get hotter) with a higher voltage differential between input and output...
That would be a linear regulator rather than a switching regulator, probably. In other words, it basically burns off the extra voltage as heat: the same current flows on the input as on the output.
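To make that concrete (using the 12 V in / 1.5 V out figures from Markey's example, with a 1 A load picked for illustration):

# Linear regulator: input current equals output current,
# so efficiency is simply v_out / v_in.
v_in, v_out, i_load = 12.0, 1.5, 1.0   # 1 A load as an example

eff = v_out / v_in               # 12.5% -- horrible, as observed
heat = (v_in - v_out) * i_load   # 10.5 W dissipated in the chip
print(f"efficiency {eff:.1%}, {heat:.1f} W burned as heat")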