Originally posted by: CTho9305
By doing an RDTSC and reading the system clock, then another RDTSC and another system clock read, you could probably detect frequency variation of the CPU's PLL / the PLL generating the HT ref or FSB clock.
I don't think so. That's exactly the problematic approach I'm talking about. If you write code to read the tick counter every second, the "one second" intervals are going to be generated by the same clock you're trying to measure. If you expect (say) 1 billion ticks every second, and the code is executed at perfectly consistent intervals (from the CPU's point of view - i.e., every 1 billion ticks), then such code would always see the tick counter increase by exactly 1 billion ticks, regardless of how long the "one second" interval really was. No matter how much the clock wobbled, the software would always see it as perfectly steady.
On the other hand, if the clock is steady but the software isn't being executed at consistent intervals, then one would expect exactly the sort of fluctuation that CPU-Z et al. actually show.
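To make that concrete, here's roughly what the RDTSC-versus-system-clock measurement looks like in practice (my own sketch, not code from anyone in this thread), assuming Windows, the MSVC __rdtsc() intrinsic, and QueryPerformanceCounter() as the reference timer. If QPC is ultimately derived from the same reference clock as the CPU's PLL, the ratio below barely moves no matter how that reference wobbles - the self-measurement problem described above. If the two clocks are independent but the thread gets delayed between the paired reads, you see jitter in the result instead.

#include <stdio.h>
#include <windows.h>
#include <intrin.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    unsigned __int64 tsc0, tsc1;
    double seconds;

    QueryPerformanceFrequency(&freq);   /* reference ticks per second */

    QueryPerformanceCounter(&t0);
    tsc0 = __rdtsc();

    Sleep(1000);                        /* roughly one second of wall time */

    QueryPerformanceCounter(&t1);
    tsc1 = __rdtsc();

    /* TSC ticks divided by elapsed reference time = apparent CPU frequency.
     * If the reference clock and the TSC wobble together, this number
     * stays put no matter how badly the underlying crystal drifts. */
    seconds = (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
    printf("apparent CPU frequency: %.3f MHz\n",
           (double)(tsc1 - tsc0) / seconds / 1e6);
    return 0;
}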
there must be some tricks to avoid bogus readings due to interrupts / I/O traffic.
A modern PC CPU has all sorts of caching, prefetching, and microcode decoding going on. Those things give the CPU tremendous throughput, but lousy predictability at small timescales. On top of that, Windows has historically had very unpredictable interrupt response times. There may not be a way to measure a modern CPU's clock accurately in a Windows program.
A real-time operating system would fare better, but even an RTOS only guarantees that software events will happen within certain time windows, not that they'll happen at a particular clock tick.
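For what it's worth, the "tricks" people usually mean boil down to something like the following (again my own sketch, not anything posted above): pin the thread to one core, raise its priority, take lots of short samples, and report the median so that samples clobbered by an interrupt or context switch land in the tails. That narrows the error bars, but it's still subject to everything said above about unpredictable interrupt latency.

#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <intrin.h>

#define SAMPLES 101

static int cmp_u64(const void *a, const void *b)
{
    unsigned __int64 x = *(const unsigned __int64 *)a;
    unsigned __int64 y = *(const unsigned __int64 *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    unsigned __int64 hz[SAMPLES];
    LARGE_INTEGER freq, t0, t1;
    int i;

    /* Stay on one core and run at high priority, to reduce (not eliminate)
     * the chance of being preempted mid-measurement. */
    SetThreadAffinityMask(GetCurrentThread(), 1);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    QueryPerformanceFrequency(&freq);

    for (i = 0; i < SAMPLES; i++) {
        unsigned __int64 tsc0, tsc1;
        double seconds;

        QueryPerformanceCounter(&t0);
        tsc0 = __rdtsc();
        Sleep(10);                      /* short window per sample */
        tsc1 = __rdtsc();
        QueryPerformanceCounter(&t1);

        seconds = (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
        hz[i] = (unsigned __int64)((double)(tsc1 - tsc0) / seconds);
    }

    /* Report the median: samples where an interrupt or context switch landed
     * between a TSC read and its paired QPC read get a skewed ratio and
     * fall into the tails of the sorted list. */
    qsort(hz, SAMPLES, sizeof hz[0], cmp_u64);
    printf("apparent CPU frequency: %.3f MHz\n", hz[SAMPLES / 2] / 1e6);
    return 0;
}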