Please see Transmeta's paper on processor and power variations:
http://www.design-reuse.com/article...ent-leakage-control-process-compensation.html
The process/processor Vth and the SPICE box are key to the scaling characteristics of the end product and its power.
P rarely scales as V^2*f below 45nm.
As a general rough rule, P scales as V^1.7*f up to V^3*f, depending on the process, the parametric choices, and where on the curve that Vdd falls.
If you know the k*C values, you can profile more accurately with P = k*C*V^n*f, where n is the relevant voltage exponent.
C varies much more at the smaller nodes than it used to, while k is a process-dependent constant.
With enough data you can approximate: for IVB, for instance, P scales at ~V^1.75*f until you hit 4.5GHz, the dead point of diminishing scaling.
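A quick sketch of that P ∝ V^n*f relation, comparing the naive square law against the ~1.75 exponent cited for IVB. The voltage and frequency figures here are invented for illustration, not measured silicon data:

```python
# Relative power under P ~ V^n * f, comparing the naive quadratic
# exponent (n = 2) against the ~1.75 cited for IVB in the post.

def rel_power(v0, f0, v1, f1, n):
    """Power at (v1, f1) relative to (v0, f0), assuming P ∝ V^n * f."""
    return (v1 / v0) ** n * (f1 / f0)

# Hypothetical overclock: 1.00 V @ 4.0 GHz -> 1.10 V @ 4.5 GHz
naive = rel_power(1.00, 4.0, 1.10, 4.5, n=2.0)    # classic V^2 rule
ivb   = rel_power(1.00, 4.0, 1.10, 4.5, n=1.75)   # IVB-like exponent

print(f"V^2 rule: {naive:.3f}x power")
print(f"V^1.75:   {ivb:.3f}x power")
```

The point is just that the exponent matters: over the same bump the V^2 rule overstates the power increase relative to the fitted V^1.75 curve, and without process data you don't know which exponent applies.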
Lastly, the Vdd values used in end shipping products are not determined by what we users find in OC/UC, but by process-wide averaged testing of thousands of samples across the Gaussian (bell-curve) distribution. End-user testing cannot account for this, and this Vdd is again linked to the above unknowns/yields.
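A toy sketch of why a binned shipping Vdd lands above what any one user measures. The sample count, spread, and guard-band here are all made up for illustration; real binning is far more involved:

```python
import random
import statistics

random.seed(42)

# Hypothetical per-chip minimum stable voltage at the target clock,
# modelled as Gaussian across the process spread (values invented).
samples = [random.gauss(1.20, 0.05) for _ in range(10_000)]

mean = statistics.mean(samples)
sigma = statistics.stdev(samples)

# Ship at mean + 3 sigma so nearly the whole distribution is stable;
# a single "golden" user chip can sit well below this.
shipped_vdd = mean + 3 * sigma

covered = sum(v <= shipped_vdd for v in samples) / len(samples)
print(f"shipped Vdd ~ {shipped_vdd:.3f} V, covers {covered:.1%} of samples")
```

One lucky chip running at 1.15V tells you nothing about the voltage needed to make the whole distribution stable.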
So we end up with two major factors yet unknown.
Eg. AMD had the 65nm Agena running no lower than 1.35V for 2.2GHz.
We users could get it down to 1.15-1.20V.
Penryn was even more ridiculous.
However, in end users' hands Agena needed 1.5V to go above 2.35GHz; it was already pushed to its limit at launch. This was a processor designed for 3GHz.
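Plugging the Agena figures above into the V^n*f relation shows how poor that trade is. The exponent here is the generic square law, assumed for illustration only; Agena's actual exponent is exactly one of the unknowns:

```python
def rel_power(v0, f0, v1, f1, n=2.0):
    """Power at (v1, f1) relative to (v0, f0), assuming P ∝ V^n * f."""
    return (v1 / v0) ** n * (f1 / f0)

# 2.2 GHz at the 1.35 V stock bin vs ~2.35 GHz needing 1.5 V
# (end-user figures from the post); n = 2 is assumed, not measured.
ratio = rel_power(1.35, 2.2, 1.50, 2.35)
gain = 2.35 / 2.2 - 1

print(f"~{ratio:.2f}x the power for ~{gain:.0%} more clock")
```

Roughly a third more power for about 7% more clock, under the assumed exponent: that is what "the scaling was not there" looks like in numbers.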
Due to the process/processor parametrics, the scaling simply was not there from a desktop (DT) user's perspective.
This is something you cannot approximate until you know the process with some end product data.
Three major unknowns kill off these power and frequency scaling approximations until we have a starting datapoint.
The way it is looking with Zen, and this is just a guesstimate that could be completely wrong, it would be easier for AMD to do a Magny-Cours than a Piledriver at this stage. Hence my tentative agreement with Abwx's end comment.
(Opinions are own)