Your point about boosting in this scenario is valid, and I completely agree with it. I do have a slight issue with how it was presented, though. Generally, the less effort the front end exerts, the more actual load gets placed on the cores. I'm not talking GPU-Z load numbers, but the actual amount of work that gets done (and the heat that gets generated with it).
A lot of programs are designed to be as easy on the front end as possible, the most notorious being FurMark. No matter the framerate, the front end is basically telling the cores, "Hey, remember what you just did? Do it again," over and over and over again. A lot of programs end up this way by accident, too; Starcraft 2's menu screen is a great example.
A game engine, on average, is a decent amount tougher for the front end to schedule. Everything is just a little more complex, with a lot of mixing it up: one microsecond it could be tessellating a tree, the next drawing that tree's shadow, the next applying post-processing effects to the shadow. The front end has to work a lot harder to keep the cores running smoothly.
If your game drops from, let's say, 30 frames per second down to 10, it means either the scene suddenly has a crapload of detail (an explosion), or something stalled really badly. Most likely, both happened at once. While the front end is trying its hardest to keep the cores fed, a lot of the scene can't be worked on until the explosion texture arrives from main memory. GPU-Z reports 100% load, but the cores really aren't doing that much work, since most of them are stalled.
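To make the "100% load but not much work" idea concrete, here's a toy model of my own (invented for illustration; this is not how GPU-Z or any real driver actually measures anything). The idea is that a load meter counts any cycle where work is resident on a core, while heat and power track the cycles where math actually executes:

```python
# Toy model: "reported load" counts cycles with work resident on the core,
# "real work" counts only cycles where the core actually executes math
# instead of stalling on a memory fetch. All numbers are made up.

def simulate(cycles, stall_fraction):
    """Return (reported_load, real_work) for a core that spends
    `stall_fraction` of its busy time waiting on main memory."""
    stalled = int(cycles * stall_fraction)
    executing = cycles - stalled
    reported_load = cycles / cycles      # meter sees "busy" the whole time
    real_work = executing / cycles       # fraction of cycles doing math
    return reported_load, real_work

# Smooth scene: barely any stalls.
print(simulate(1_000_000, 0.05))   # (1.0, 0.95)

# Waiting on an explosion texture from main memory: cores mostly idle.
print(simulate(1_000_000, 0.80))   # (1.0, 0.2)
```

Both cases report 100% load, but the stalled case is doing roughly a fifth of the math, which is where the "less heat at lower framerates" effect comes from.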
You could give a million counter-examples, but on average, the longer a scene takes to render, the less power the GPU draws per unit of time compared to the same engine running at twice the framerate, even when both are 100% GPU bound.
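As a back-of-envelope sketch of that scaling (all numbers hypothetical): if you assume the math needed to draw one frame of a given scene costs a roughly fixed amount of core energy, then power draw is just energy per frame times frames completed per second:

```python
# Rough model, invented numbers: energy per frame is assumed constant
# for a given scene, so core power scales with framerate.

def core_power_watts(joules_per_frame, fps):
    """Average core power = energy spent per frame * frames per second."""
    return joules_per_frame * fps

scene_energy = 2.5  # hypothetical joules of core work per frame

slow = core_power_watts(scene_energy, 30)  # GPU bound, stall-heavy
fast = core_power_watts(scene_energy, 60)  # same engine, double the fps
print(slow, fast)  # 75.0 150.0 -> double the framerate, double the power
```

Real GPUs complicate this with clock gating, boost behavior, and the fact that stall-heavy frames don't cost quite the same energy, but the first-order trend is the one described above.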