
Are multi-core processors better for battery life in the long run?

kyrax12

Platinum Member
After reading about the Samsung Galaxy SIII, I scrolled down a list of comments on Digg and noticed a trend of people complaining about the effects a quad-core processor would have on the phone. Primarily, their complaints were about battery life.

I read up on Nvidia Tegra 2 design documents, which described a systematic distribution of workloads between the two cores. It's called symmetric multiprocessing, and Nvidia boasts more performance and less energy consumption on its behalf. I assume all multi-core processors should support symmetric multiprocessing.

Now I don't get why people are complaining about the battery consumption of a quad-core processor when it should be the other way around: since the workload is being divided among multiple cores, it doesn't require full energy output from one core, so less energy consumption should take place.
 
I suppose there are different schools of thought out there. Whether you're talking quad, dual, or single core, there's the idea that if you quickly get your work done and reach idle, you could save more power than by spreading the workload out.

There are those who claim that dual core automatically gives you more power benefits than single core, but honestly I think it varies from phone to phone. If you're running heavily threaded/multitasking applications, then sure, having a multi-core processor will help a LOT, because the faster you get done with work, the more time you can spend at idle.

I wouldn't say it's a clear cut advantage from a power perspective though.
 
I think it varies from phone to phone.

I agree. I think it will vary widely depending on the phone, and the tasks that each user is running on their phone. Also, there seems to be room for improvement with respect to optimizing for power consumption on the Android platform. As Google refines Android, I expect that the OS will become more power efficient with future releases.
 
Throw in some optimization and a die shrink, and I'd think it would be possible to make tomorrow's quad core with a power profile similar to today's dual core.
 
Now I don't get why people are complaining about the battery consumption of a quad-core processor when it should be the other way around: since the workload is being divided among multiple cores, it doesn't require full energy output from one core, so less energy consumption should take place.

Well, for example...

A single core, when enabled, draws 25mW of power regardless of what it does. Each additional 100MHz step adds another 20mW. At 1GHz, a single core would draw about 225mW, right?

With a second core, you'd think that dividing that 1GHz workload in two should save more power, but now look at it this way: a single core at 500MHz draws 125mW, so two cores running at 500MHz each draw a total of 250mW. Compare that to 225mW.
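This linear model is easy to sketch out; note that the 25mW base draw and 20mW-per-100MHz step are the post's own illustrative numbers, not measurements from a real chip:

```python
def core_power_mw(freq_mhz, base_mw=25, step_mw=20):
    """Toy linear model: fixed base draw plus 20mW per 100MHz step."""
    return base_mw + step_mw * (freq_mhz // 100)

single_1ghz = core_power_mw(1000)       # 25 + 10*20 = 225 mW
dual_500mhz = 2 * core_power_mw(500)    # 2 * (25 + 5*20) = 250 mW
print(single_1ghz, dual_500mhz)         # 225 250
```

Under this model the two half-speed cores draw slightly more total power than one full-speed core, mostly because the 25mW base draw is paid twice.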

But hang on a second. If the task can be completed in half the time, what happens? That 250mW should effectively be cut in half, because the two cores don't have to keep drawing that much power for as long as a single core would.

In other words, it works out to about 125mW to complete a task on a dual-core device versus 225mW on a single-core device.

So I think you are absolutely right. There are cases where a multi-core design is more power efficient. However, that's only assuming the workload is quite light and nothing else is keeping the cores running at a higher frequency. In a perfect world, that would mean no live wallpaper, no background tasks, nothing, with unused cores completely off as soon as their tasks are done. Otherwise, the overhead of keeping some cores running to complete their tasks would effectively eliminate the power-saving advantages of a multi-core design. And in a high-load situation, multi-core can still draw many times more power than single-core.

Some have suggested that the solution to background tasks is to include a lower-power core alongside the higher-performing cores to handle repetitive small tasks, but I don't think that's the real solution; it just adds more to the power consumption of the whole chip.

In this case, Apple's decision to limit the processing window of any background task is quite a wise one, as it means their chip design can force cores to shut down once that window has closed. Not to say that their decision is the best, but it seems quite beneficial to power consumption.
 
Well, for example...

A single core, when enabled, draws 25mW of power regardless of what it does. Each additional 100MHz step adds another 20mW. At 1GHz, a single core would draw about 225mW, right?

With a second core, you'd think that dividing that 1GHz workload in two should save more power, but now look at it this way: a single core at 500MHz draws 125mW, so two cores running at 500MHz each draw a total of 250mW. Compare that to 225mW.

But hang on a second. If the task can be completed in half the time, what happens? That 250mW should effectively be cut in half, because the two cores don't have to keep drawing that much power for as long as a single core would.

In other words, it works out to about 125mW to complete a task on a dual-core device versus 225mW on a single-core device.

So I think you are absolutely right. There are cases where a multi-core design is more power efficient. However, that's only assuming the workload is quite light and nothing else is keeping the cores running at a higher frequency. In a perfect world, that would mean no live wallpaper, no background tasks, nothing, with unused cores completely off as soon as their tasks are done. Otherwise, the overhead of keeping some cores running to complete their tasks would effectively eliminate the power-saving advantages of a multi-core design. And in a high-load situation, multi-core can still draw many times more power than single-core.

Some have suggested that the solution to background tasks is to include a lower-power core alongside the higher-performing cores to handle repetitive small tasks, but I don't think that's the real solution; it just adds more to the power consumption of the whole chip.

In this case, Apple's decision to limit the processing window of any background task is quite a wise one, as it means their chip design can force cores to shut down once that window has closed. Not to say that their decision is the best, but it seems quite beneficial to power consumption.

So, in conclusion, only light workloads can take advantage of the lower power consumption multi-core processors have compared to single-core processors.

Otherwise, the complete opposite can occur when all the cores reach their max frequency during heavy tasks such as gaming, video, etc.

Interesting.

It basically draws a line between heavy and casual users.

Though I think this really only applies to gamers.
 
A lot of people use SetCPU or a similar program to force-clock their Android phone to the lowest or second-lowest CPU frequency available when the screen is off, i.e., down to 250MHz. At this frequency, power consumption is much lower.

Example from my own OGDroid:
MHz  vSel
250   28
600   40
I don't know precisely what the vSel settings are, but they control the voltage to the CPU. Under the assumption that voltage changes linearly with that vSel unit, at 600MHz and 40 vSel the processor is using (40/28) ≈ 43% more voltage than at 250MHz.
So at 600MHz that's 1.43 × 600/250 ≈ 3.43× the power to run the chip.
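The 3.43× figure follows directly if, as assumed above, power scales linearly with both voltage (via the vSel unit) and frequency; real CMOS power actually scales with the square of voltage, which would make the gap larger:

```python
# vSel table from the post: (frequency in MHz, vSel voltage setting)
low = {"mhz": 250, "vsel": 28}
high = {"mhz": 600, "vsel": 40}

voltage_ratio = high["vsel"] / low["vsel"]             # ≈ 1.43
power_ratio = voltage_ratio * high["mhz"] / low["mhz"]  # linear in V and f
print(round(power_ratio, 2))  # 3.43
```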

Now if I had another core, then when the screen is on, it could remain clocked at the lowest frequency and voltage setting, and android could farm out all the background tasks to it, having it handle garbage collection, pushing applications out to disk if the phone is running low on RAM, etc, all while allowing the current application full access to its own core, improving the user experience.
 
In this case, Apple's decision to limit the processing window of any background task is quite a wise one, as it means their chip design can force cores to shut down after that window has closed. Not to say that their decision is the best, but it seems like it's quite beneficial to power consumption.

I foresee this becoming a BIG problem with Android. What happens when you get a lazy coder whose users are complaining of poor performance? He hard-codes the program to always ask for the highest performance state from the Android OS. Unless Google rigidly polices their app market like Apple does (they won't, and this is going to be yet another example of their utter failure to execute anything properly), these apps are going to abound and we'll be stuck in the same state we are today, where the user has to have intricate knowledge of Android and the apps he/she is running to get the best performance from the phone.
 
So, in conclusion, only light workloads can take advantage of the lower power consumption multi-core processors have compared to single-core processors.

Otherwise, the complete opposite can occur when all the cores reach their max frequency during heavy tasks such as gaming, video, etc.

Interesting.

Yes, but at the very least we techy types will have the option of using CPU program XYZ to force the Android OS to do all the background processing on the ultra-efficient core when the screen is off.

The point of this ultra-power-efficient core is to make the "sleep" state of the phone use practically no battery power. This processor can be used as the sole processing unit any time direct user action is not required-- for example when the screen is off, like when a user is not using the phone; or when the user is in the middle of a call and the phone is next to his head.
 
I foresee this becoming a BIG problem with Android. What happens when you get a lazy coder whose users are complaining of poor performance? He hard-codes the program to always ask for the highest performance state from the Android OS. Unless Google rigidly polices their app market like Apple does (they won't, and this is going to be yet another example of their utter failure to execute anything properly), these apps are going to abound and we'll be stuck in the same state we are today, where the user has to have intricate knowledge of Android and the apps he/she is running to get the best performance from the phone.

afaik an app dev can't "request performance states" from android.

or are you referring to wake locks?
 
afaik an app dev can't "request performance states" from android.

or are you referring to wake locks?

no
I must have been confusing this with the AndroidOS requesting the performance state from the hardware. Then, depending on the performance state requested, Tegra3 uses or doesn't use the ULP core for its processing.

NVIDIA handles all of the core juggling through its own firmware. Depending on the level of performance Android requests, NVIDIA will either enable the companion core or one or more of the four remaining A9s. The transition should be seamless to the OS and as all of the cores are equally capable, any apps you're running shouldn't know the difference between them.
 
Well, for example...

There's a small problem with your math. A core at 500 MHz completes the work half as fast, or, alternately worded, it takes twice as long; your example doesn't account for this.

Consider the following naive example where a task takes 100 million operations to complete and an operation can be completed every clock cycle.

A single core processor clocked at 1 GHz will complete this task in .1 seconds.

A single core processor clocked at 500 MHz will complete this task in .2 seconds.

Assuming we can split the work evenly between two cores at no cost, it will take two cores clocked at 500 MHz .1 seconds to complete the task.

The only question that remains is whether the cores are capable of operating more efficiently at 500 MHz than at 1 GHz. If this is true, splitting the load between two cores is more beneficial. If not, then you're better off with a single core.
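The naive timing example can be checked directly, under the stated assumptions of one operation per clock cycle and a free, perfectly even split of the work:

```python
OPS = 100_000_000  # the example's task: 100 million operations

def runtime_s(ops, cores, freq_hz):
    """Seconds to finish, assuming 1 op/cycle and a perfect even split."""
    return ops / (cores * freq_hz)

print(runtime_s(OPS, 1, 1_000_000_000))  # single core @ 1 GHz   -> 0.1 s
print(runtime_s(OPS, 1, 500_000_000))    # single core @ 500 MHz -> 0.2 s
print(runtime_s(OPS, 2, 500_000_000))    # two cores @ 500 MHz   -> 0.1 s
```

So the dual-core setup only matches, never beats, the single fast core on completion time; any win has to come from running each core at a lower, more efficient voltage.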
 
no
I must have been confusing this with the AndroidOS requesting the performance state from the hardware. Then, depending on the performance state requested, Tegra3 uses or doesn't use the ULP core for its processing.

i think all that is handled within tegra 3 depending on how many threads are active/ number and types of instructions in flight etc. To android it just appears as 4 normal cpus at all times.

although i'm sure that sleep state with the quad powered down and the companion doing everything is in response to a screen off signal from android or the screen controller.
 
There's a small problem with your math. A core at 500 MHz completes the work half as fast or alternately worded, it takes twice as long; however, in your example this isn't considered.

Consider the following naive example where a task takes 100 million operations to complete and an operation can be completed every clock cycle.

A single core processor clocked at 1 GHz will complete this task in .1 seconds.

A single core processor clocked at 500 MHz will complete this task in .2 seconds.

Assuming we can split the work evenly between two cores at no cost, it will take two cores clocked at 500 MHz .1 seconds to complete the task.

And you are right. But it's also a small problem, because in the real world there are operations that take multiple cycles to complete, and then you have to deal with the CPU core idling between instructions that require it to do so. In those cases, slotting in extra instructions so the core doesn't stall in between can boost performance quite a lot.

And if we are already considering a perfect scaling scenario, then there is no doubt that perfect scaling should occur in software level as well, which means that 2 logical CPUs should be able to fetch and write information twice as fast as a single logical CPU even though it may take more time to process that information.

That's assuming memory bandwidth is not a concern, of course. I can't quite say that with current mobile chips.
 
There's a small problem with your math. A core at 500 MHz completes the work half as fast or alternately worded, it takes twice as long; however, in your example this isn't considered.

Consider the following naive example where a task takes 100 million operations to complete and an operation can be completed every clock cycle.

A single core processor clocked at 1 GHz will complete this task in .1 seconds.

A single core processor clocked at 500 MHz will complete this task in .2 seconds.

Assuming we can split the work evenly between two cores at no cost, it will take two cores clocked at 500 MHz .1 seconds to complete the task.

The only question that remains is whether the cores are capable of operating more efficiently at 500 MHz than at 1 GHz. If this is true, splitting the load between two cores is more beneficial. If not, then you're better off with a single core.

The GHz war of the last decade between AMD and Intel says you are wrong. There's lots of evidence from the last few decades of high-clock-speed CPUs performing a lot worse than lower-clocked ones.
 
Well, for example...

A single core, when enabled, draws 25mW of power regardless of what it does. Each additional 100MHz step adds another 20mW. At 1GHz, a single core would draw about 225mW, right?

With a second core, you'd think that dividing that 1GHz workload in two should save more power, but now look at it this way: a single core at 500MHz draws 125mW, so two cores running at 500MHz each draw a total of 250mW. Compare that to 225mW.

But hang on a second. If the task can be completed in half the time, what happens? That 250mW should effectively be cut in half, because the two cores don't have to keep drawing that much power for as long as a single core would.

In other words, it works out to about 125mW to complete a task on a dual-core device versus 225mW on a single-core device.

You're ignoring the fact that you need a higher voltage to achieve the higher clocks, and an increase in voltage raises power with the square of the voltage, rather than the linear scaling you get from a higher clock.

You have two cores running at 500MHz at 0.7V, each drawing 125mW. A single core at 1000MHz would need something around 1.1V (Tegra 2), which would result in nearly 3 times more power (1.57² ≈ 2.47×, plus a little more because of the higher clock).
 
You're ignoring the fact that you need a higher voltage to achieve the higher clocks, and an increase in voltage raises power with the square of the voltage, rather than the linear scaling you get from a higher clock.

You have two cores running at 500MHz at 0.7V, each drawing 125mW. A single core at 1000MHz would need something around 1.1V (Tegra 2), which would result in nearly 3 times more power (1.57² ≈ 2.47×, plus a little more because of the higher clock).

Voltage is already factored in if watts are used to represent power draw. Watts are voltage × current. If the current remains relatively constant (as it should; that's what you need a power supply for), then an increase in voltage means an increase in wattage. So for a 500MHz core to run stable at 0.7V and use 125mW (0.125W) in the process, the current running through has to be around 0.18A. If that 0.18A remains constant, then an increase to 1.25V results in a wattage of around 223mW (0.223W).

I'm not sure where you got that exponential...
 
And you are right. But it's also a small problem because in real world, there are operations that take multiple cycles to complete . . .

Which is why I said it was a naive example. It was merely to point out that there was an error in your original calculation. The point is that two cores running at half clock speed can't complete the task more quickly, but they are capable of doing it more efficiently, which is generally what's most important.

the GHz war of the last decade between AMD and Intel says you are wrong. lots of evidence from the last few decades of high clock speed CPU's performing a lot worse than lower clocked ones

Which has nothing to do with this argument. That kind of architectural difference doesn't exist within SoCs when everyone essentially uses the same ARM core. The examples can even be broken down to the question of whether, given a dual-core chip, it is better to use one core at full speed and keep the other completely shut off, or to use both cores at half speed. How is running the same chip at a lower clock going to improve performance?
 
Which is why I said it was a naive example. It was merely to point out that there was an error in your original calculation. The point is that two cores running at half clock speed can't complete the task more quickly, but they are capable of doing it more efficiently, which is generally what's most important.

Yeah, I see that now. Thanks for pointing it out.

But with that being the case, I don't think efficiency even factors in anymore because there is really no scenario in which a dual-core chip can surpass a single-core chip in terms of power consumption, assuming that both of them run on the same architecture.

From a performance standpoint, I can see that efficiency would be better for a dual-core design as certain threads can be prioritized and completed faster, thus making them more responsive to the user, but that's still at the expense of battery life because a dual-core design may still use slightly more power to complete the same task.
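Putting the thread's two corrections together makes the energy picture concrete. Using the toy linear power model from earlier in the thread (25mW base + 20mW per 100MHz, same voltage for both setups, both finishing the task in 0.1s per the naive timing example), energy is just power × time:

```python
def core_power_mw(freq_mhz):
    """Thread's toy model: 25mW base + 20mW per 100MHz, voltage held equal."""
    return 25 + 20 * (freq_mhz // 100)

task_time_s = 0.1  # both setups finish in the same time (see naive example)
single_mj = core_power_mw(1000) * task_time_s       # 225mW * 0.1s ≈ 22.5 mJ
dual_mj = 2 * core_power_mw(500) * task_time_s      # 250mW * 0.1s ≈ 25.0 mJ
print(single_mj, dual_mj)
```

At equal voltage the dual-core setup uses slightly more energy for the same task, which is exactly the point above: the dual-core win only appears once the lower clock allows a lower voltage.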
 
Voltage is already factored in if Watt is used to represent power draw. Watt is Voltage x Current. If the Current remains relatively constant (as it should, that's what you need a power supply for), then an increase in Voltage means an increase in Wattage. So for a 500MHz core to run stable at 0.7v and use up 125mW (0.125W) in the process, that means the current running through has to be around 0.18 Amp. If that 0.18 Amp remains constant, then an increase to 1.25v results in a wattage of around 223mW (0.223W).

I'm not sure where you got that exponential...

First, thanks for the information. Physics is not my best. 🙁

I remember something like this:
The switching power dissipated by a chip using static CMOS gates is C·V²·f, where C is the capacitance being switched per clock cycle, V is voltage, and f is the switching frequency,[1] so this part of the power consumption decreases quadratically with voltage.
http://en.wikipedia.org/wiki/Dynamic_voltage_scaling#Power
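Plugging the thread's earlier Tegra-style numbers (0.7V at 500MHz per core versus roughly 1.1V at 1GHz, both figures taken from the posts above, not from a datasheet) into that C·V²·f formula reproduces the "nearly 3 times" claim:

```python
def dynamic_power(c, v, f_hz):
    """Switching power of static CMOS gates: P = C * V^2 * f."""
    return c * v**2 * f_hz

C = 1.0  # capacitance term cancels out of the ratio
single = dynamic_power(C, 1.1, 1_000_000_000)       # one core, 1.1V @ 1GHz
dual = 2 * dynamic_power(C, 0.7, 500_000_000)       # two cores, 0.7V @ 500MHz
print(round(single / dual, 2))  # 2.47
```

Because voltage enters squared, the single fast core pays a big penalty for its higher voltage even though the total clock throughput is the same.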
 
I foresee this becoming a BIG problem with Android. What if you get a lazy coder who has users complaining of poor performance, what does he do, he hard codes the program to just always ask for the highest performance state from the Android OS.

App developers can do whatever they want. Don't use the app if it does something you don't like. That is all there is to it, don't make it more complicated than it needs to be.
 
Voltage is already factored in if Watt is used to represent power draw . . .

Operating at higher frequencies generally requires some increase in voltage; conversely, operating at lower frequencies can generally be done at lower voltages. Also, the power used by CPUs depends not on the voltage itself but on the square of the voltage, which is where the quadratic/polynomial (not exponential) growth comes into play.

Efficiency differences only come into play if operating at a lower clock speed can be done at a decreased voltage. For large differences in clock speed there is a difference, but for smaller differences it's unlikely that much, if any, difference exists.
 
It depends on the power-gating implementation when only one core is needed, and on how well the software is threaded. If two cores can do the job of one clocked-up core for less power, then you need the software to take advantage of it, plus some intelligence somewhere in the phone to determine when/if a split workload is more appropriate than a single-core load.

All depends on the execution.
 
First thx for the information. Physic is not my best. 🙁

I remember something like this:
http://en.wikipedia.org/wiki/Dynamic_voltage_scaling#Power

Well, to be honest, I'm over-simplifying it, and my numbers shouldn't be taken as anything other than an illustration of the actual formula. They aren't real numbers, in other words.

P = V x I (Voltage x Current) calculates static power consumption.

You are right in that in real world, dynamic frequency switching would require a dynamic power consumption formula, which is what you just referenced:

P = C x V^2 x f

The actual calculation is more complicated, and requires a lot more than what Wikipedia just referenced. I think this will be a good read if you are interested:

http://www.ti.com/lit/an/scaa035b/scaa035b.pdf
 
Which is why I said it was a naive example. It was merely to point out that there was an error in your original calculation. The point is that two cores running at half clock speed can't complete the task more quickly, but they are capable of doing it more efficiently, which is generally what's most important.



Which has nothing to do with this argument. That kind of architectural difference doesn't exist within SoCs when everyone essentially uses the same ARM core. The examples can even be broken down to the question of whether, given a dual-core chip, it is better to use one core at full speed and keep the other completely shut off, or to use both cores at half speed. How is running the same chip at a lower clock going to improve performance?

Intel vs. AMD was also the exact same instruction set, and Intel's current CPUs outperform their 10-year-old CPUs at half the GHz rating.

Same with ARM: all the CPUs support the same instruction set, but they have their own architectures, which shows up in the benchmarks.
 