I ran a few benchmarks with HWINFO32 running in the background. The latter might not be perfectly accurate, but it gives a general sense of what's going on.
Programs used:
- LinX
- Cinebench R11.5
- FurMark
- HWINFO32
Cinebench + Furmark: Running Furmark alongside Cinebench has minimal impact on the Cinebench score. My normal score is 6.7 points; here I got 6.5 and 6.6 points. HWINFO32 showed 16W for the GPU, while the CPU sat at 60W and rose 2-3W during the benchmark run.
Cinebench alone: The result is 6.7 points. The CPU uses 60-64W.
Furmark alone: The GPU used 14-15W.
LinX alone: The most CPU-demanding of the programs; on its own it uses close to 80W, and the CPU fan starts spinning faster. Got 85 GFlops.
LinX + Furmark: The CPU runs at 74-77W, and the result is not much lower at 81 GFlops. The interesting part is the iGPU: it uses 14-16W at the start, but a few seconds later (probably when LinX starts doing computations) it drops to under 8W. I did 2 runs, and between runs the iGPU goes back to 14-16W, then drops back to 7.5W as computation starts. The CPU fan spins faster than normal.
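For anyone who wants to log the same kind of power numbers without eyeballing HWINFO32, here's a minimal sketch. It assumes a Linux box with a RAPL-capable Intel CPU exposing the powercap sysfs path below (HWINFO32 itself is Windows-only, so this is just an analogous way to sample package power); the path and the function names are my own, not from the original post.

```python
# Sketch: sample average package power on Linux via Intel RAPL (powercap sysfs).
# Assumption: /sys/class/powercap/intel-rapl:0 exists and exposes energy_uj.
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

def watts_from_energy(e0_uj, e1_uj, interval_s, max_uj=2**32):
    """Convert two energy-counter readings (microjoules) to average watts."""
    delta = e1_uj - e0_uj
    if delta < 0:            # counter wrapped around its maximum
        delta += max_uj
    return delta / 1e6 / interval_s

def sample_power(interval_s=1.0):
    """Read the energy counter twice and return average power in watts."""
    with open(RAPL_ENERGY) as f:
        e0 = int(f.read())
    time.sleep(interval_s)
    with open(RAPL_ENERGY) as f:
        e1 = int(f.read())
    return watts_from_energy(e0, e1, interval_s)

if __name__ == "__main__":
    print(f"package power: {sample_power():.1f} W")
```

Sampling once a second while kicking off LinX and Furmark should show the same kind of drop described above, though RAPL reports the whole package rather than separate CPU/iGPU rails.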
Conclusion: If anything, the power management system favors the CPU over the iGPU. Significant throttling is observed on the iGPU when CPU-intensive and GPU-intensive applications run together, while the CPU performance impact is minimal; most likely the CPU merely falls back to its base clocks.
On Ivy Bridge, the iGPU has a dedicated L3 cache. Its primary purpose is to reduce power usage by not firing up the ring bus and the CPU's L3 cache, and that might also help graphics performance when running CPU- and GPU-intensive applications together.