Is it really true, though? ...
Yes. I'm not even going to go through your entire reply, because you have some fundamental misconceptions about how a CPU is designed, marketed, and funded. Also, I never mentioned x86 vs ARM because AMD's and Intel's problems are (mostly) unrelated to the ISA. It's a design philosophy issue, not an ISA issue. Maybe one of them grows a brain, changes their design approach, and it eventually becomes an ISA issue, but it's not there yet.
As a sidenote: it is not as trivial as you describe it. Energy efficiency is measured as total consumption for a given workload. So for your 5900HS to be 2x more efficient at F/2, it would need to consume only 1/4 the average power, as it will need twice the time for the workload.
You are invited to try this out for yourself. Just follow the download link in the OP.
It actually is that trivial. Let's go through the math. Keep in mind that this is a very simple napkin-math example to show how much impact the V/f curve has on efficiency.
By definition, dynamic power is proportional to capacitance * frequency * voltage^2. So initially, you might think that changing frequency doesn't impact power efficiency, since runtime for a fixed workload scales up by the same factor that power scales down, but you're forgetting that frequency and voltage are also related. In general, you can assume that relationship is more or less linear, though it can break down at very high or very low voltages.
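To spell out that "naive" view, here's a minimal sketch, assuming the workload is a fixed number of cycles and voltage is held constant:

```latex
% Dynamic (switching) power:
P \propto C \, f \, V^2

% Energy for a fixed workload of N cycles, voltage held constant:
E = P \cdot t \propto (C \, f \, V^2) \cdot \frac{N}{f} = C \, N \, V^2
% frequency cancels, so at fixed voltage the energy per workload
% (and hence the efficiency) looks independent of clock speed
```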
So with that in mind, power is actually more proportional to capacitance * frequency^3. So cutting frequency in half theoretically cuts power by 7/8, which is a 4x increase in efficiency. Obviously there are a lot of other factors that would influence this (leakage, uncore, shape of the V/f curve), but in general this is close enough. Ex: with Alder Lake, you can run at half clocks for 1/7th of the power.
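Here's that half-clock case as a quick script, under the same napkin assumptions (dynamic power only, voltage scaling linearly with frequency, no leakage or uncore):

```python
# Napkin math for the frequency/power relationship.
# Assumptions (same as above): dynamic power ~ C * f * V^2, voltage roughly
# proportional to frequency, so power ~ C * f^3. Leakage, uncore power, and
# the real curvature of the V/f curve are all ignored.

def power(cap: float, freq: float) -> float:
    """Relative dynamic power with V assumed proportional to f."""
    return cap * freq ** 3

def perf(ipc_proxy: float, freq: float) -> float:
    """Relative performance: work per unit time ~ IPC * frequency."""
    return ipc_proxy * freq

# Same core (capacitance/IPC fixed at 1 unit), full clock vs half clock:
full_power, half_power = power(1, 1.0), power(1, 0.5)
full_perf, half_perf = perf(1, 1.0), perf(1, 0.5)

print(f"half-clock power: {half_power / full_power:.3f}x")  # 0.125x, i.e. a 7/8 cut
print(f"half-clock perf/W: {(half_perf / half_power) / (full_perf / full_power):.1f}x")  # 4.0x
```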
Now let's apply this thought process to M1 vs AMD/Intel. Again, this is only napkin math to show how strongly the cubic relationship affects power, and I'm dramatically simplifying. Let's say AMD/Intel's chips run at 5 GHz with a capacitance of 3 units. Now let's say they try to shift the operating region and reduce the target clock speed to 3 GHz. To make up the performance, they want to boost IPC by 66%. Here, IPC and capacitance are what's linked. A well-executed architecture should have IPC-to-capacitance scaling of around 1:1; very well executed changes do better, and poor architectures do worse. For simplicity's sake, let's assume they achieve a 1:1 ratio (not easy, but also not impossible), bumping the capacitance up from 3 units to 5. The initial frequency-centric approach then has a symbolic power draw of 3 * 5^3 = 375 units, while the IPC-centric one has 5 * 3^3 = 135 units. Comparing the two, the IPC-centric design approach draws 64% less power than the frequency-centric one at the same performance.
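The same napkin model, applied to those two made-up design points (the capacitance "units" and the 1:1 IPC-to-capacitance assumption are just the illustrative numbers from above, not measurements of any real chip):

```python
# Napkin comparison of the two hypothetical design points.
# The figures are the made-up "units" from the example, not real-chip data;
# IPC is assumed to scale 1:1 with capacitance, and power ~ C * f^3.

def power(cap: float, freq_ghz: float) -> float:
    return cap * freq_ghz ** 3      # power ~ C * f^3

def perf(cap: float, freq_ghz: float) -> float:
    return cap * freq_ghz           # performance ~ IPC * f, with IPC ~ C

freq_centric = (3, 5.0)   # 3 capacitance units at 5 GHz
ipc_centric  = (5, 3.0)   # 5 capacitance units at 3 GHz

p_freq, p_ipc = power(*freq_centric), power(*ipc_centric)
print(f"perf:  {perf(*freq_centric):.0f} vs {perf(*ipc_centric):.0f}  (equal by construction)")
print(f"power: {p_freq:.0f} vs {p_ipc:.0f} units")                 # 375 vs 135
print(f"IPC-centric power reduction: {1 - p_ipc / p_freq:.0%}")    # 64%
```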
Like I said, I'm dramatically oversimplifying the CPU design process, but it's pretty obvious that chasing maximum clock speeds is, almost by definition, ridiculously inefficient. Sadly, AMD and Intel marketing prefer this approach because uninformed consumers don't understand what IPC is; frequency is just easier to market. I could make a whole new post just about that.