imported_ats
Senior member
- Mar 21, 2008
But you're saying they would be better off in all ways with fewer cores, so the scheduler could just keep the other cores powered off.
But it doesn't.
All scheduling is based on analyzing repetitive behaviors. Always has been. Saying that it'd need an oracle to be effective is meaningless without actual data showing that it's a loss with subpar scheduling.
No, most scheduling isn't based on analyzing repetitive behaviors, and certainly not the schedulers used in Linux, especially with respect to power.
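To be concrete about that: a load-following governor like Linux's ondemand or schedutil reacts to the utilization it just measured; it doesn't build a model of repeating behavior or predict future phases. Here's a toy C sketch of that kind of policy (my own illustration with made-up limits, not actual kernel code):

```c
/* Minimal sketch (not kernel code) of a load-following cpufreq policy:
 * pick a target frequency from the utilization seen in the last sampling
 * window, with no history of repeating behavior. Numbers are invented. */
#include <stdio.h>

#define F_MIN_KHZ  800000
#define F_MAX_KHZ 1800000

static unsigned int pick_target_freq(unsigned int util_pct)
{
    /* Scale frequency linearly with recent utilization, clamped to limits. */
    unsigned int f = F_MIN_KHZ + (F_MAX_KHZ - F_MIN_KHZ) * util_pct / 100;
    return f > F_MAX_KHZ ? F_MAX_KHZ : f;
}

int main(void)
{
    unsigned int samples[] = { 10, 85, 30, 85, 10 }; /* % busy per window */
    for (int i = 0; i < 5; i++)
        printf("util %3u%% -> %u kHz\n", samples[i],
               pick_target_freq(samples[i]));
    return 0;
}
```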
Okay, so when you say Fmax @ Vmin you mean the maximum frequency that a particular voltage can support, and Fmin as the lowest frequency that voltage can support (and in this case Fmin should be the same for all voltages). Usually when I've seen Vmin/Vmax it refers to global limits irrespective of frequency.
If you see Fmin/Fmax or Vmin/Vmax independently, they are usually talking about the absolute limits; things like Fmax@Vmin or Fmax@1V denote sub-sectioned data.
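To make the terminology concrete, here's a toy OPP-style (frequency, voltage) table, with all numbers invented, showing how Fmax@Vmin differs from the absolute Fmax and Vmin limits:

```c
/* Illustration with made-up numbers: "Vmin"/"Fmax" on their own are the
 * absolute limits of the table; "Fmax@Vmin" is the highest frequency the
 * table allows at the minimum voltage. */
#include <stdio.h>

struct opp { unsigned int freq_mhz; unsigned int volt_mv; };

static const struct opp table[] = {
    {  800,  750 },   /* lowest voltage the process supports (Vmin) */
    { 1200,  850 },
    { 1500,  950 },
    { 1800, 1050 },   /* absolute Fmax, needs the highest voltage   */
};

int main(void)
{
    printf("Vmin = %u mV, Fmax@Vmin = %u MHz, Fmax = %u MHz\n",
           table[0].volt_mv, table[0].freq_mhz, table[3].freq_mhz);
    return 0;
}
```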
What in the article leads you to think that they are ever running at a higher voltage than what the chip's binning and power management dictate is allowed at that frequency?
Because Vmin is a very real thing, and it is highly unlikely that the core/process is designed to scale Vdd much below 0.7-0.75 V, if it can even go that low. AKA there is a hard floor to Vmin, and you tend to have DVFS frequency scaling well below Fmax@Vmin. This is useful for when a core is processing limited data but must be kept in a keep-alive state; sure, it uses less power than Fmax@Vmin, but from a perf/W perspective it's worse than Fmax@Vmin. It's done primarily when perf doesn't matter.
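Rough back-of-envelope of why that's worse in perf/W terms: once voltage is pinned at Vmin, dynamic power scales roughly with C*V^2*f while leakage stays put, so performance falls faster than power as you drop frequency. Every number below is invented purely for illustration:

```c
/* Sketch of why running below Fmax@Vmin hurts perf/W: with voltage at its
 * floor, dynamic power scales ~linearly with frequency but leakage does
 * not, so perf (proportional to f) falls faster than total power. */
#include <stdio.h>

int main(void)
{
    const double v_min  = 0.75;   /* V, assumed hard voltage floor        */
    const double c_eff  = 1.0e-9; /* F, assumed effective switched cap    */
    const double p_leak = 0.10;   /* W, assumed leakage at Vmin           */

    const double freqs_mhz[] = { 800, 400, 200 }; /* all run at Vmin */
    for (int i = 0; i < 3; i++) {
        double f_hz  = freqs_mhz[i] * 1e6;
        double p_dyn = c_eff * v_min * v_min * f_hz;   /* ~C*V^2*f */
        printf("%4.0f MHz @ Vmin: %.0f MHz/W\n",
               freqs_mhz[i], (f_hz / (p_dyn + p_leak)) / 1e6);
    }
    return 0;   /* prints ~1455, ~1231, ~941 MHz/W: perf/W drops with f */
}
```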
Fmin for the big core is at 800MHz, and if you look at the frequency tables, voltage keeps scaling all the way down to that point, which is why it switches to the little cores below that, which hit Vmin at a substantially lower frequency. Yes, there's no point decreasing frequency below where you can scale voltage. What makes you think that's happening here?
What makes you think it isn't happening...
Again. The graph at the end. I don't know why you keep ignoring it. It shows perf/W increases with lower perf over a huge dynamic range.
Which tells us nothing about whether b.L is good or bad, nor whether the workload being profiled has any relevance to the workloads being used.
You're making broad statements about costs of context switches and cache flushes but the fact is that this only applies for as much as you actually switch clusters. And a lot of loads hit steady states where they don't switch for a long time, if ever. This is evident in the data.
The data presented is completely insufficient to make an argument one way or the other. And the reason they don't switch is that they've had to bias things heavily, because switching is so costly.
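To illustrate the kind of bias I mean (with assumed costs, nothing measured from the article): a cluster switch only pays off if the task then stays on the target cluster long enough to amortize the migration overhead, so a scheduler has to demand a long expected residency before it will move anything.

```c
/* Rough sketch: if each cluster switch burns a fixed amount of energy
 * (cache flush + state migration) and costs some stall time, the move
 * only saves net energy after a minimum residency on the target cluster.
 * All values below are assumptions for illustration. */
#include <stdio.h>

int main(void)
{
    const double switch_cost_us   = 50.0;  /* assumed stall per migration    */
    const double power_saving_mw  = 100.0; /* assumed saving on little core  */
    const double switch_energy_uj = 20.0;  /* assumed energy per switch      */

    /* Break-even residency: residency * saving >= switch energy. */
    double min_residency_us = switch_energy_uj / (power_saving_mw * 1e-3);
    printf("switch only pays off after ~%.0f us of residency "
           "(plus %.0f us lost per migration)\n",
           min_residency_us, switch_cost_us);
    return 0;
}
```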