Why do clock speeds fluctuate?

Maximilian

Lifer
Feb 8, 2004
12,604
15
81
I notice it when looking at CPU-Z. It's not SpeedStep, for the record: the clock speed is supposed to be 1600MHz, but it's actually 1595, and the FSB is at 797.xx and fluctuates a bit. My Opteron 170 did that as well, but it was usually 5-10MHz over the rated speed.

What causes this? I'm not complaining or anything, just curious. Did this happen back in the days of 333MHz Pentium IIs with 66MHz FSBs?
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
I'd imagine that most clock generators in consumer-level gear fluctuate within a certain tolerance. I've also noticed that systems with cheaper PSUs or with mobos with crappier VRM circuitry fluctuate more.
 

Jjoshua2

Senior member
Mar 24, 2006
635
1
76
I've wondered about this too. Is it a factor in overclocking instability? I upgraded to an expensive Corsair PSU (not for that reason), but I haven't noticed an improvement.
 

Zoomer

Senior member
Dec 1, 1999
257
0
76
These frequencies are based off a lowly 4.77Ghz clock crystal on your board. While the crystal is supposed to be (just about) perfect, after being stepped up through a convoluted system, it ends up thrown off a little.
 

Maximilian

Lifer
Feb 8, 2004
12,604
15
81
Originally posted by: Zoomer
These frequencies are based off a lowly 4.77Ghz clock crystal on your board. While the crystal is supposed to be (just about) perfect, after being stepped up through a convoluted system, it ends up thrown off a little.

Can this crystal be overclocked? Do they come in faster variants than 4.77GHz?
 

heyheybooboo

Diamond Member
Jun 29, 2007
6,278
0
0
Originally posted by: Zoomer
These frequencies are based off a lowly 4.77Ghz clock crystal on your board. While the crystal is supposed to be (just about) perfect, after being stepped up through a convoluted system, it ends up thrown off a little.

don't you wish . . . :D

4.77MHz - with the system timer ticking at around 18.2 interrupts per second - was all the original PC processors could handle 'back in the day'

 

Zoomer

Senior member
Dec 1, 1999
257
0
76
Sorry, that's a typo. I meant MHz, of course. :)

The crystal would probably shatter at 4.77GHz.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
They are replaceable (if you have soldering skills), but I wouldn't be surprised if you ran into different issues than the ones normally encountered while OCing*. If you're really interested, check out this article: http://www.overclockers.com/tips745/ or google "TurboPLL" (I think that's what people called it back in the day...)

*The reason I say this is that I would expect the result to overclock substantially more of the system than normal OC methods do. For example, if the USB rates are derived from this crystal, they would also speed up. It looks like the whole reason the TurboPLL stuff requires extra hacking is to avoid these issues.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Originally posted by: Soviet
I notice it when looking at CPU-Z. It's not SpeedStep, for the record: the clock speed is supposed to be 1600MHz, but it's actually 1595, and the FSB is at 797.xx and fluctuates a bit. My Opteron 170 did that as well, but it was usually 5-10MHz over the rated speed.

What causes this? I'm not complaining or anything, just curious. Did this happen back in the days of 333MHz Pentium IIs with 66MHz FSBs?

The clock speeds don't fluctuate; if they did, you would have a seriously broken system. What fluctuates is your measurement of the clock speeds.

 

Lagged2Death

Junior Member
Nov 14, 2007
7
0
0
Originally posted by: Phynaz
The clock speeds don't fluctuate; if they did, you would have a seriously broken system. What fluctuates is your measurement of the clock speeds.
Bingo. Even the cheapest crystal oscillators are very, very stable. On the other hand, complex desktop OSes generally aren't very good at timing things perfectly consistently.
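(A minimal illustration of that last point, not from the original thread - hypothetical POSIX C with an arbitrary 100ms interval. Ask the OS for a fixed sleep and time it against a monotonic clock, and the wake-ups wander from iteration to iteration; the wobble is in the scheduling, not the crystal.)

    /* Hypothetical demo: time a nominal 100 ms sleep repeatedly.
       The measured durations vary because the OS wakes the process
       late by a scheduler-dependent amount each time. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec req = {0, 100 * 1000 * 1000}; /* 100 ms nominal */
        for (int i = 0; i < 10; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                      + (t1.tv_nsec - t0.tv_nsec) / 1e6;
            printf("iteration %d: slept %.3f ms\n", i, ms);
        }
        return 0;
    }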
 

Maximilian

Lifer
Feb 8, 2004
12,604
15
81
Originally posted by: Phynaz
Originally posted by: Soviet
I notice it when looking at CPU-Z. It's not SpeedStep, for the record: the clock speed is supposed to be 1600MHz, but it's actually 1595, and the FSB is at 797.xx and fluctuates a bit. My Opteron 170 did that as well, but it was usually 5-10MHz over the rated speed.

What causes this? I'm not complaining or anything, just curious. Did this happen back in the days of 333MHz Pentium IIs with 66MHz FSBs?

The clock speeds don't fluctuate; if they did, you would have a seriously broken system. What fluctuates is your measurement of the clock speeds.

I see. So my FSB is actually 800MHz and the clock speed is 2000MHz; it's only the software reading that fluctuates slightly. Awesome.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: Lagged2Death
Originally posted by: Phynaz
The clock speeds don't fluctuate; if they did, you would have a seriously broken system. What fluctuates is your measurement of the clock speeds.
Bingo. Even the cheapest crystal oscillators are very, very stable. On the other hand, complex desktop OSes generally aren't very good at timing things perfectly consistently.

Not true - they definitely move around if you have spread spectrum enabled. Even with spread spectrum disabled I would expect a little bit of variation - PLL design is probably more rocket science than rocket science itself is nowadays. The crystal oscillator may be spot-on, but the outputs of the PLLs likely aren't.
 

Lagged2Death

Junior Member
Nov 14, 2007
7
0
0
Originally posted by: CTho9305
Not true - they definitely move around if you have spread spectrum enabled.
Well, sure, isn't that the whole point of spread-spectrum?

If the clock fluctuation displayed in something like CPU-Z is real, then how could the software -- which is marching along in perfect lock-step with that fluctuating clock -- possibly see it?

 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: Lagged2Death
Originally posted by: CTho9305
Not true - they definitely move around if you have spread spectrum enabled.
Well, sure, isn't that the whole point of spread-spectrum?

If the clock fluctuation displayed in something like CPU-Z is real, then how could the software -- which is marching along in perfect lock-step with that fluctuating clock -- possibly see it?

There are multiple PLLs in a system. The system clock apparently runs off the 14MHz crystal directly, so it should be pretty stable. By doing an RDTSC and reading the system clock, then another RDTSC and another system clock read, you could probably detect frequency variation of the CPU's PLL / the PLL generating the HT ref or FSB clock. I don't know how these apps really measure it though... there must be some tricks to avoid bogus readings due to interrupts / I/O traffic.

Barcelona apparently has a PLL for each core and the northbridge... shouldn't be too hard to read various performance monitors or timestamp counters to monitor their relative frequencies.
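(A rough sketch of that approach for the curious - hypothetical POSIX C for GCC on x86, and certainly not how CPU-Z actually does it. Bracket an interval with RDTSC and a timer derived from the board oscillator, and the ratio gives an apparent core frequency; repeated samples show the variation under discussion, plus noise from interrupts. On CPUs of that era the TSC ticked with the core clock; newer chips have an invariant TSC, so treat this as illustrative.)

    /* Hypothetical demo: estimate the core clock by comparing the
       timestamp counter against CLOCK_MONOTONIC over ~100 ms windows.
       Interrupts and scheduling add noise to each sample. */
    #include <stdio.h>
    #include <time.h>
    #include <x86intrin.h>   /* __rdtsc() */

    static double elapsed_ns(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
    }

    int main(void) {
        for (int i = 0; i < 10; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            unsigned long long c0 = __rdtsc();
            do {                       /* busy-wait roughly 100 ms */
                clock_gettime(CLOCK_MONOTONIC, &t1);
            } while (elapsed_ns(t0, t1) < 1e8);
            unsigned long long c1 = __rdtsc();
            /* cycles per nanosecond = GHz; x1000 gives MHz */
            printf("sample %d: %.2f MHz apparent\n",
                   i, (c1 - c0) / elapsed_ns(t0, t1) * 1e3);
        }
        return 0;
    }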
 

Lagged2Death

Junior Member
Nov 14, 2007
7
0
0
Originally posted by: CTho9305
By doing an RDTSC and reading the system clock, then another RDTSC and another system clock read, you could probably detect frequency variation of the CPU's PLL / the PLL generating the HT ref or FSB clock.
I don't think so. That's exactly the problematic approach I'm talking about. If you write code to read the tick counter every second, the "one second" intervals are going to be generated by the same clock you're trying to measure. If you expect (say) 1 billion ticks every second, and the code is executed at perfectly consistent intervals (from the CPU's point of view - i.e., every 1 billion ticks), then such code would always see the tick counter increase by 1 billion ticks, regardless of how long the "one second" interval really was. No matter how much the clock wobbled, the software would always see it as perfectly accurate.

On the other hand, if the clock is steady but the software isn't getting executed at consistent intervals, then one would expect to see exactly the sort of fluctuation one does see in CPU-Z et al.
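(A tiny hypothetical sketch of that circularity, in C for x86: if the "one second" interval is itself counted in TSC ticks, the measurement can only ever report back the number you chose - the clock grades its own homework.)

    /* Hypothetical illustration: wait "one second" by counting TSC
       ticks, then measure the elapsed TSC ticks. The result equals
       the expected count by construction, however long the real
       second was. */
    #include <stdio.h>
    #include <x86intrin.h>

    int main(void) {
        const unsigned long long expect = 1000000000ULL; /* "1 second" */
        unsigned long long start = __rdtsc();
        while (__rdtsc() - start < expect)
            ;                          /* wait "one second" in ticks */
        unsigned long long ticks = __rdtsc() - start;
        printf("saw %llu of %llu expected ticks\n", ticks, expect);
        return 0;
    }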

there must be some tricks to avoid bogus readings due to interrupts / I/O traffic.
A modern PC CPU has all sorts of cache, prefetching, microcode decoding and caching going on. Those things give the CPU tremendous throughput, but lousy predictability at small timescales. On top of that, Windows has historically had very unpredictable interrupt response times. There may not be a way to measure a modern CPU's clock accurately in a Windows program.

A real-time operating system would fare better, but even those only guarantee that software events will happen within certain time-windows, not that they'll happen at a particular clock tick.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: Lagged2Death
Originally posted by: CTho9305
By doing an RDTSC and reading the system clock, then another RDTSC and another system clock read, you could probably detect frequency variation of the CPU's PLL / the PLL generating the HT ref or FSB clock.
I don't think so. That's exactly the problematic approach I'm talking about. If you write code to read the tick counter every second, the "one second" intervals are going to be generated by the same clock you're trying to measure. If you expect (say) 1 billion ticks every second, and the code is executed at perfectly consistent intervals (from the CPU's point of view - i.e., every 1 billion ticks), then such code would always see the tick counter increase by 1 billion ticks, regardless of how long the "one second" interval really was. No matter how much the clock wobbled, the software would always see it as perfectly accurate.

No, if the CPU's PLL drifts relative to its reference clock, you could read the system clock (which is derived from the oscillator), count cycles, and compare the two. I don't know if there'd be a way to detect short-term drift in the crystal oscillator itself, though.