I finally bought another meter to monitor power from the wall on my 3900X. My other meter is connected to a 3700X. Both computers are running a mixture of MIP1 and MCM tasks from WCG at full load at base clocks, and both are running Linux.
3900X @ 3.8 GHz (TDP = 105 "thermal watts") pulls 212 watts at the wall.
Other details: Gigabyte X570 MB, 2 x 16 GB RAM modules @ 3200, dual 1080 Ti graphics cards (idle), 2 NVMe drives, 2 SSDs, a 2 TB spinner, and 4 fans. I'm not sure which model of Seasonic PSU is installed.
3700X @ 3.7 GHz (TDP = 65 "thermal watts") pulls 127 watts at the wall.
Other details: Gigabyte X570 MB, 2 x 8 GB RAM modules @ 3200, dual graphics cards (RTX 2070 and 2070 Super, idle), 1 NVMe drive, and a Corsair 850 W PSU (80+ Gold).
I thought the power draw from the wall on both computers was a little high. The MIP1 app is Rosetta, so I know it isn't using SSE/AVX instruction sets, but I'm wondering whether the MCM app is optimized to use them.
Anyway, my fudge factor for these two CPUs at base clock is: power at the wall (watts) = 2 x TDP. Obviously it will be app dependent.
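For anyone who wants to check that rule of thumb against the numbers above, here's a minimal sketch (the 2x multiplier is just my fudge factor from these two data points, not anything AMD specifies):

```python
# Sanity-check the "wall watts ~= 2 x TDP" fudge factor against my readings.
systems = {
    "3900X": {"tdp_w": 105, "wall_w": 212},
    "3700X": {"tdp_w": 65, "wall_w": 127},
}

for name, s in systems.items():
    ratio = s["wall_w"] / s["tdp_w"]      # measured wall power per TDP watt
    estimate = 2 * s["tdp_w"]             # the fudge-factor prediction
    print(f"{name}: measured {s['wall_w']} W, 2 x TDP = {estimate} W, "
          f"ratio = {ratio:.2f}")
```

Both ratios come out within a few percent of 2.0 (2.02 for the 3900X, 1.95 for the 3700X), which is why I'm comfortable using it at base clocks.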
If I suspend WCG work on the 3900X, the "idle" power draw is 160 watts. That seems high, doesn't it?