So how is an RTC implemented? Don't they just have an RTC module (32768 Hz crystal + battery + an IC / southbridge integration)? I thought software just reads a few registers, which wouldn't be affected by an OC.
The RTC module is a battery-backed quartz clock, typically similar to the one in a digital wristwatch. It connects via a serial interface or a similar link to a system bus, so that the OS/BIOS can read/write the time.
OSs have tended to read the RTC only occasionally. Historically, they would then keep track of time using a timer interrupt, usually generated by the same RTC (e.g. a 60 Hz timer, which is why a lot of old DOS programs could only time to the nearest 1/60 second). The OS could then apply adjustments for time zones and network sync as needed, and write the result back to the RTC.
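To make the tick-based scheme concrete, here's a minimal simulation (hypothetical, not real OS code): the OS reads the RTC once at boot, then adds a fixed increment for every timer interrupt, so nothing shorter than one tick is measurable.

```python
# Hypothetical sketch of classic tick-based timekeeping: read the RTC once
# at boot, then add a fixed increment per timer interrupt.

TICK_HZ = 60                   # classic PC timer rate from the text
TICK_SECONDS = 1.0 / TICK_HZ   # increment added per interrupt

def time_after_ticks(boot_time: float, ticks: int) -> float:
    """System time = RTC value read at boot + one tick per interrupt."""
    return boot_time + ticks * TICK_SECONDS

# The smallest interval this clock can resolve is one tick (~16.7 ms):
t0 = time_after_ticks(0.0, 0)
t1 = time_after_ticks(0.0, 1)
print(t1 - t0)
```

Anything that starts and finishes between two interrupts appears to take zero time, which is exactly the 1/60-second precision limit described above.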
Windows has for many years been able to "skew" its internal clock to stay in sync with a reliable time source: if you are synced to a network time server, Windows doesn't just set the time to the server value, it calculates how much the clock is gaining or losing and applies a correction in real time, so that any drift before the next check should be minimal.
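The arithmetic behind that skewing can be sketched like this (a simplified illustration, not Windows' actual algorithm): estimate the rate at which the local clock gains or loses relative to the server, then scale subsequent local readings by that rate.

```python
# Hypothetical clock-skew correction: instead of jumping to the server time,
# estimate the local clock's drift rate and scale future readings by it.

def drift_rate(local_elapsed: float, server_elapsed: float) -> float:
    """Ratio of true (server) time to locally measured time."""
    return server_elapsed / local_elapsed

def corrected(local_elapsed: float, rate: float) -> float:
    """Apply the drift correction to a locally measured interval."""
    return local_elapsed * rate

# Example: between two syncs the server says 3600 s passed, but the local
# clock counted 3601 s -- it runs fast by about 0.028%.
rate = drift_rate(3601.0, 3600.0)

# After another 3601 local seconds, the corrected clock reads 3600 s:
print(round(corrected(3601.0, rate), 6))  # 3600.0
```

The advantage over simply setting the time is that the clock never jumps backwards, which would confuse timestamps and running timers.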
I'm not sure what changes Windows 8 has made. However, it wouldn't surprise me if MS were trying to get away from the old 60 Hz timer system and instead pushing as much of their timing as possible onto high-precision timers. In Win XP, if you used the regular programming API to get the time, you'd get 1/60-second precision; however, a second method was sometimes available to provide access to a high-precision timer (originally a CPU clock-cycle counter, but later a chipset-provided timer, to get around problems of variable CPU clocks).
If you use MS's .NET programming framework, the default timing method is via a high-precision timer, but it will automatically fall back to 1/60-second timing if a high-precision hardware timer isn't available.
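A rough analogue of that fallback, written in Python purely for illustration (the real .NET mechanism is the Stopwatch class and its IsHighResolution flag): prefer a high-resolution timer if the platform reports one, otherwise drop back to a coarse ~1/60 s clock.

```python
# Illustrative analogue (not .NET code) of "use a high-precision timer,
# fall back to coarse timing if one isn't available".

import time

COARSE_RESOLUTION = 1.0 / 60  # the old tick-based precision limit

def get_timer():
    """Return (timer_function, resolution_in_seconds)."""
    info = time.get_clock_info("perf_counter")
    if info.resolution < COARSE_RESOLUTION:
        # High-precision source available: use it.
        return time.perf_counter, info.resolution
    # Fall back to the coarse wall clock.
    return time.time, COARSE_RESOLUTION

timer, resolution = get_timer()
start = timer()
# ... timed work would go here ...
elapsed = timer() - start
print(resolution, elapsed >= 0.0)
```

On virtually any modern machine the first branch is taken, just as .NET's high-precision path is the common case today.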
I would not be surprised if MS had made high-precision timers the default source for all time-keeping requests in Windows 8.
The issue with high-precision timers is that they tend to be chipset- or CPU-based. On Intel Core i platforms, the chipset timer is driven by the single master clock generator for all system buses. During OS boot, the system reads the timer frequency from the chipset, but the OS does not expect that frequency to change in operation; a "live" bus-clock reconfiguration can therefore lead to timekeeping errors.
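The resulting error is simple proportional arithmetic (the frequency value below is just an example, not a claim about any specific chipset): the OS converts raw ticks to seconds using the frequency it read once at boot, so if the real frequency later changes, elapsed time is wrong by the same proportion.

```python
# Hypothetical arithmetic showing why a live bus-clock change breaks
# timekeeping: ticks are converted to seconds using the frequency the OS
# read at boot, not the frequency the timer is actually running at.

BOOT_FREQ_HZ = 14_318_180  # example frequency read at boot (assumption)

def elapsed_seconds(ticks: int) -> float:
    """Convert a raw tick count to seconds using the boot-time frequency."""
    return ticks / BOOT_FREQ_HZ

# Suppose an overclock raises the real timer frequency by 5%. After one
# real second, the counter has advanced 5% more ticks than the OS expects:
real_freq_hz = BOOT_FREQ_HZ * 1.05
ticks_in_one_real_second = round(real_freq_hz)

print(round(elapsed_seconds(ticks_in_one_real_second), 3))  # 1.05
```

So a clock running 5% fast makes the OS think 1.05 seconds pass for every real second, which is exactly the kind of drift an on-the-fly bus overclock can introduce.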