While it's possible that you could break up a CPU core between layers, there's a bit of latency that makes that problematic with our current methods.
It is entirely possible. Intel tested a version of Pentium 4 with it. It resulted in:
- Peak temperature reduction of nearly 25C at the same overall temperature
- 30% power reduction
- 8% faster overall, and 15% faster per clock (at 13C higher temps but still lower power)
- A quarter fewer pipeline stages, which is largely responsible for the per-clock gain
The problem is that while it might work for lower-power designs, it creates a thermal density problem for the highest-end parts. It is not practical there without experimental cooling methods such as microfluidic channels.
Lower power draw in Cinebench MT with DLVR, despite the actual patent showing it offers no benefit at high current draw.
Here's my analysis of the patent:
https://forums.anandtech.com/thread...ure-lakes-rapids-thread.2509080/post-40772218
Knowing what "DLVR" stands for and spending 30-60 minutes reading the patent (even just skimming it) will tell you the reality. So will basic electronics.
DLVR stands for "Digital Linear Voltage Regulator". Key word: linear. A linear regulator is essentially a fancy resistor: unlike a switched-mode regulator, it drops voltage by burning power, which gets dissipated as heat.
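The "fancy resistor" point is easy to see with numbers. A minimal sketch (the 1.8 V input, 1.1 V output, and 50 A load are illustrative assumptions, not figures from the patent):

```python
# Why a linear regulator is "a fancy resistor": everything it drops
# across the input-to-output voltage difference is burned as heat.
def linear_reg(v_in, v_out, i_load):
    """Return (power delivered, power burned as heat, efficiency)."""
    p_out = v_out * i_load            # power the CPU core actually uses
    p_loss = (v_in - v_out) * i_load  # dropped voltage * current -> heat
    return p_out, p_loss, p_out / (p_out + p_loss)

# e.g. dropping a 1.8 V input rail to 1.1 V at 50 A of load current:
p_out, p_loss, eff = linear_reg(1.8, 1.1, 50.0)
print(f"{p_out:.0f} W to the core, {p_loss:.0f} W as heat, {eff:.0%} efficient")
```

Note that the efficiency is just v_out/v_in, which is why a linear stage looks worse and worse the more voltage it has to drop.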
The patent tells you the power saving is ONLY for bursty workloads. Previously you had one regulator, and the CPU predicts the next required voltage. Adding a second regulator increases the current capacity, which lets you lower the voltage, because voltages are set for worst-case scenarios (so the CPU doesn't crash). According to the patent it only kicks in at high power, though, since keeping it active all the time would result in worse efficiency, never mind better.
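The reason trimming worst-case voltage margin saves power at all: dynamic power scales roughly with V^2. A rough sketch of the effect (the 1.10 V rail and 50 mV of removed guardband are my own illustrative assumptions, not numbers from the patent):

```python
# Dynamic CPU power scales roughly as C * V^2 * f, so at fixed frequency
# the savings from shaving voltage guardband go as (V_new / V_old)^2.
def dynamic_power_ratio(v_nominal, guardband_removed):
    """Fraction of original dynamic power after lowering the rail."""
    v_new = v_nominal - guardband_removed
    return (v_new / v_nominal) ** 2

# e.g. a 1.10 V rail running with 50 mV less worst-case margin:
ratio = dynamic_power_ratio(1.10, 0.050)
print(f"dynamic power drops to {ratio:.1%} of original")
```

Even a few tens of millivolts of margin is worth several percent of power, which is why the extra current capacity (and thus smaller droop guardband) matters for bursty loads.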