So you're saying HPET is separate from Invariant TSC in terms of functionality, but everything I've managed to find seems to say that Invariant TSC is a replacement for HPET.
HPET came out almost 10 years ago so it's not exactly new technology. And as far as I'm concerned, the stability or power efficiency of my rig hasn't been compromised by turning HPET off, as it was never in use to begin with.
Yes, HPET can cause plenty of problems. When I forced Windows to use it, it resulted in tons of lag and stuttering in games.
I'm trying to figure this one out myself. Windows definitely isn't using it whether it's enabled or disabled in the BIOS.
The LAPIC timer is even older than HPET.
...
Like I mentioned earlier, the technology is nearly 10 years old at this point. Better things have been created since then.
Not really. Windows will use it if it's there, and it can cause problems. It would find it and use it on my GA-P35-DS3R, and cause exactly the described problems: in-game stuttering and audio skipping.

Forcing its usage is a bit of a leap of faith, isn't it? But advocating its disabling in the BIOS is too much too.
The TSC is just a 64-bit number that is incremented on each clock tick, regardless of the power-saving mode or the actual clock speed of the CPU.
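For anyone who wants to see that raw counter, a minimal sketch (assuming an x86 compiler that provides the __rdtsc intrinsic, i.e. GCC/Clang/MSVC); the value is just a tick count, not wall time:

```c
#include <stdio.h>
#include <stdint.h>
#ifdef _MSC_VER
#include <intrin.h>     /* MSVC: __rdtsc */
#else
#include <x86intrin.h>  /* GCC/Clang: __rdtsc */
#endif

int main(void)
{
    uint64_t t0 = __rdtsc();        /* read TSC before the work */
    volatile long sink = 0;
    for (long i = 0; i < 1000000; i++)
        sink += i;                  /* some busy work to time */
    uint64_t t1 = __rdtsc();        /* read TSC after the work */
    printf("elapsed ticks: %llu\n", (unsigned long long)(t1 - t0));
    return 0;
}
```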
Invariant TSC is a full-blown timer. If it weren't, then HPET would be used by the OS and I would suffer severe consequences by disabling it in the BIOS.

Time services on Windows have undergone changes with every new version of Windows. Considerable changes came after Vista and Server 2008. The parallel progress of hardware and software development requires the software to stay compatible with a whole variety of hardware platforms. On the other hand, new hardware enables the software to reach better performance. Today's hardware provides the High Precision Event Timer (HPET) and an invariant Time Stamp Counter (TSC). The variety of timers is described in "Guidelines For Providing Multimedia Timer Support". The "IA-PC HPET Specification" is now more than 10 years old and some of its goals have not yet been reached (e.g. aperiodic interrupts). While QueryPerformanceCounter benefited from using the HPET/TSC compared to the ACPI PM timer, these days the HPET is outdated by the invariant TSC for many applications. However, the typical HPET signature (TimeIncrement of the function GetSystemTimeAdjustment() and MinimumResolution of the function NtQueryTimerResolution() are 156001) disappeared with Windows 8.1. Windows 8.1 goes back to the roots; it goes back to 156250. The TSC frequency is calibrated against HPET periods to finally get proper timekeeping.

An existing invariant TSC influences the behavior of GetSystemTimeAsFileTime() noticeably. The influence on the functions QueryPerformanceCounter() and QueryPerformanceFrequency() is described in sections 2.4.3. and 2.4.4. Windows 8 introduces the function GetSystemTimePreciseAsFileTime() "with the highest possible level of precision (<1us)". This seems to be the counterpart to the Linux gettimeofday() function.
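A small sketch of those two APIs side by side; this assumes Windows 8 or later (GetSystemTimePreciseAsFileTime does not exist on older versions):

```c
#define _WIN32_WINNT 0x0602  /* Windows 8+, for GetSystemTimePreciseAsFileTime */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, c0, c1;
    FILETIME ft;
    ULARGE_INTEGER u;

    QueryPerformanceFrequency(&freq);   /* QPC ticks per second (opaque source) */
    QueryPerformanceCounter(&c0);
    Sleep(100);                         /* nominally 100 ms */
    QueryPerformanceCounter(&c1);
    GetSystemTimePreciseAsFileTime(&ft);
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;

    printf("QPC frequency : %lld Hz\n", freq.QuadPart);
    printf("measured sleep: %.3f ms\n",
           (c1.QuadPart - c0.QuadPart) * 1000.0 / freq.QuadPart);
    printf("precise time  : %llu (100 ns units since 1601)\n",
           (unsigned long long)u.QuadPart);
    return 0;
}
```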
Why is it stupid if there are no consequences? I haven't noticed ANYTHING out of the ordinary since turning it off. My music and videos play just fine, as before, but my games are even smoother and my overall system responsiveness seems to have improved.

Not sure anyone claimed that stability or power efficiency would be affected after disabling HPET; it was more like patiently explaining that it is a clock event source, whereas the invariant TSC isn't. It is not being used by default, and therefore it is stupid to disable it.
That was for testing purposes, when I forced its use and found out that HPET is completely inferior to the TSC. And disabling it would be bad too if there were consequences associated with it, but so far I haven't noticed any.

Forcing its usage is a bit of a leap of faith, isn't it? But advocating its disabling in the BIOS is too much too.
One thing I don't get with HPET, though, is this: why not create such a timer as a new PIC, with either its own clock source, or hardware-guaranteed synchronization with some other hardware clock source? Even better would be to have the CPU synced to it, so a program/driver could read the current time and compare it to the event time as well.
I went to turn this off in the BIOS, but it seems that with the last BIOS update for my 780 SLI board there is no longer a user option; it only shows up in Device Manager, lol.
Love ASUS, but their lack of BIOS changelogs is BS, and the BIOS menus are too, at least the English text. A dog could do better programming on the ROG boards, IMO.
AFAIK, no, but it should allow for something similar (I meant actually integrating all of that but the clock itself into the PIC). If the TSC were amended to be part of the CPU, with a single clock source available to all cores, always counting at the same rate, then yes, the TSC would do it. However, the TSC offers no such guarantee, so even if most CPUs do it that way now, it's a huge compatibility concern to assume that, which could result in data corruption, lockups, BSODs, kernel panics, etc.

Isn't that what invariant TSC does?
HPET is newer than the TSC, but not newer than the invariant TSC. The invariant TSC is only available in modern CPUs, starting with Nehalem on Intel's side and most likely Bulldozer on AMD's.

HPET is newer than TSC. I just mean that HPET's implementation is so basic that it requires overly much software complication to use well, which seems rather silly in the face of Moore's Law, especially by the '00s, when regular CPUs were already hundreds of millions of transistors.
Invariant TSC is a full-blown timer. If it weren't, then HPET would be used by the OS and I would suffer severe consequences by disabling it in the BIOS.
It also seems to be OS-dependent. Windows 8.1, which is what I have, seems to have deprecated HPET functionality. So HPET is going the way of the dodo.
Are you just repeating "Invariant TSC" all over like it is some magic word, without actually knowing what it is? I first read the TSC value in 2003, on a P4, with ASM in some network timing project I needed (and ironically the same code had problems several years later on a Pentium M Dothan mobile CPU, because the TSC was getting stopped in low-power CPU modes; it was not "invariant").
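For what it's worth, the "invariant" property is something you can actually query: CPUID leaf 0x80000007 reports it in EDX bit 8. A minimal check, assuming GCC/Clang on x86:

```c
#include <stdio.h>
#include <cpuid.h>  /* GCC/Clang: __get_cpuid */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    /* CPUID.80000007H:EDX[8] = invariant TSC (constant rate in all
     * P-states and C-states). Clear on older parts like Dothan. */
    if (__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx) && (edx & (1u << 8)))
        puts("Invariant TSC: yes");
    else
        puts("Invariant TSC: no (TSC may stop or vary with clock speed)");
    return 0;
}
```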
OK, I concede your point. When I first read that, I was wondering whether it could mean that HPET would still be necessary, even though it wasn't being used as the primary timer.

It means that the TSC calibration (basically, how many ticks of the CPU's "invariant" TSC clock correspond to how many time units in nanoseconds) is done using the known period of the HPET (since it is clocked at 14 MHz+, the resolution is obviously great enough). So Windows 8.1 is actually making use of HPET for initial calibration, and if it is not available (like disabled in the BIOS?), it will fall back to a less accurate source of timing (LAPIC or PIT).
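A rough sketch of that calibration idea: count TSC ticks across a known span of HPET ticks and derive the TSC frequency. Note that hpet_read() and rdtsc() here are hypothetical stand-ins; real code runs in the kernel with the HPET registers memory-mapped:

```c
#include <stdint.h>

#define HPET_FREQ_HZ 14318180ULL   /* typical ~14.318 MHz HPET clock */

extern uint64_t hpet_read(void);   /* hypothetical: read HPET main counter */
extern uint64_t rdtsc(void);       /* hypothetical: RDTSC wrapper */

uint64_t calibrate_tsc_hz(void)
{
    uint64_t h0 = hpet_read(), t0 = rdtsc();

    /* spin until a known span of HPET ticks (~10 ms) has elapsed */
    while (hpet_read() - h0 < HPET_FREQ_HZ / 100)
        ;

    uint64_t h1 = hpet_read(), t1 = rdtsc();

    /* tsc_ticks / hpet_ticks * hpet_freq = tsc_freq in Hz */
    return (t1 - t0) * HPET_FREQ_HZ / (h1 - h0);
}
```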
Yeah, I'm going to have to agree with you. After thinking about it, it may have been another setting that caused my performance to increase. I changed several settings that day, mostly voltages, so who knows. I'm going to have to do a full backtrack when I have the time.

Anyway, this discussion is pointless: HPET is not being used by default as the timer event source, so it is pointless to disable it in the BIOS. It is risky to force-enable it in Windows because of incompatibility with some motherboards, but you could try doing so if your CPU is older. Those are known facts...
Invariant TSC is only in modern processors like the Intel Core i5/i7 series, Bulldozer and Piledriver. The TSC you're talking about is a much older form.
The 15 ms (actually 15.625 ms) resolution is not really a problem of the underlying timer hardware, but of the Windows kernel. At least up to Win7, the "master tick" of the kernel runs at this speed of 1/64 s. Many of the timer functions in Windows run off this master clock. You can check the resolution of different methods by downloading the binaries of this CodeProject example. While Timers.Timer and Thread.Sleep deliver 1 ms precision on my Win7 SP1 system, you can't really rely on that; older systems may well deliver only 1/64 s resolution, as do the Form.Timer and Thread.Timer tests on my system.

Yeah, but it was still cool to have clock-tick-style timing on the P4 back then! Much better than the standard Windows 2000-era stuff with 15 ms resolution, I think (I remember some stuff about multimedia "timers" that were insanely unworkable and seemed to have zero reliability; the TSC was a major step forward).
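If you want to see that kernel tick granularity yourself, a small sketch using the multimedia timer API (winmm) to lower the tick from the default ~15.625 ms to 1 ms; Sleep(1) typically rounds up to the current tick length:

```c
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")  /* MSVC: link timeBeginPeriod/timeEndPeriod */

/* measure how long Sleep(1) actually takes, in milliseconds */
static double sleep1_ms(void)
{
    LARGE_INTEGER f, a, b;
    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&a);
    Sleep(1);
    QueryPerformanceCounter(&b);
    return (b.QuadPart - a.QuadPart) * 1000.0 / f.QuadPart;
}

int main(void)
{
    printf("Sleep(1) at default tick         : %.2f ms\n", sleep1_ms());
    timeBeginPeriod(1);   /* request 1 ms timer resolution */
    printf("Sleep(1) after timeBeginPeriod(1): %.2f ms\n", sleep1_ms());
    timeEndPeriod(1);     /* restore the previous resolution */
    return 0;
}
```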
They probably have the TSC on the part of the uncore which never goes to sleep, and the core accesses that value when an RDTSC instruction is issued. Considering that RDTSC has a latency of 20-30 cycles on Nehalem+, there should be enough time for an uncore access.

I think Nehalem was the first CPU with an invariant TSC (one that ran at the same clock on all cores and sockets and had the same value everywhere). And that's why I called it a marvel for its day; honestly, I have no idea how they do this magic of synchronization.
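You can get a rough feel for that latency from user mode by taking the minimum delta between back-to-back reads; a sketch, assuming the same __rdtsc intrinsic as above:

```c
#include <stdio.h>
#include <stdint.h>
#ifdef _MSC_VER
#include <intrin.h>
#else
#include <x86intrin.h>
#endif

int main(void)
{
    uint64_t best = UINT64_MAX;
    for (int i = 0; i < 1000000; i++) {
        uint64_t a = __rdtsc();
        uint64_t b = __rdtsc();     /* back-to-back read */
        if (b - a < best)
            best = b - a;           /* keep the minimum observed delta */
    }
    printf("min back-to-back RDTSC delta: %llu cycles\n",
           (unsigned long long)best);
    return 0;
}
```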
So time keeping/reading issues were solved, but timer problems still remain to this day. There is a mess of HPET/LAPIC/PIT timers available, each with their own quirks etc.
Well, what kind of new capability will this give desktop systems? I can see a use for it in embedded systems, where you might want to sleep for extended periods of time and make use of the TSC's 64-bit resolution. But the APIC timer already has a one-shot mode on its regular 32-bit register, which is enough to wait for 42.95 seconds at a bus clock of 100 MHz (2^32 ticks / 100 MHz ≈ 42.95 s), even when the Divide Configuration Register is set to divide by 1; set the DCR to a divider of 128 and you get a maximum time span of 91.6 minutes (if I interpret Intel's docs correctly).

EDIT: BTW, I've read somewhere that LAPIC timers now have an operational mode where you can set them to fire at a certain TSC value; that is about the best possible when combined with the invariant TSC. I really hope future OSes will start using this and solve the timer-related problems for good.
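That "fire at a certain TSC value" mode is the LAPIC's TSC-deadline mode; support for it is advertised in CPUID leaf 1, ECX bit 24. A minimal detection sketch, assuming GCC/Clang on x86 (actually arming the timer by writing the IA32_TSC_DEADLINE MSR is ring-0 work, so this only checks for support):

```c
#include <stdio.h>
#include <cpuid.h>  /* GCC/Clang: __get_cpuid */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    /* CPUID.01H:ECX[24] = LAPIC TSC-deadline mode available */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 24)))
        puts("LAPIC TSC-deadline mode: supported");
    else
        puts("LAPIC TSC-deadline mode: not supported");
    return 0;
}
```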
Note that the TSCs on NUMA systems may not necessarily run synchronized. Intel provides a user-programmable TSC_AUX register for use with the RDTSCP instruction, so that software can keep track of a thread switching cores/sockets during execution and handle the non-uniform TSC readings accordingly.
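A sketch of that RDTSCP pattern: read the TSC and the OS-programmed TSC_AUX value in one instruction, so the thread can tell whether it migrated between two readings. The aux value's encoding is OS-specific (Linux, for example, stores the CPU and node numbers there); this assumes the OS writes IA32_TSC_AUX at all:

```c
#include <stdio.h>
#include <stdint.h>
#ifdef _MSC_VER
#include <intrin.h>     /* MSVC: __rdtscp */
#else
#include <x86intrin.h>  /* GCC/Clang: __rdtscp */
#endif

int main(void)
{
    unsigned int aux0, aux1;
    uint64_t t0 = __rdtscp(&aux0);  /* TSC + TSC_AUX, read atomically */
    uint64_t t1 = __rdtscp(&aux1);
    if (aux0 != aux1)
        puts("migrated between cores/sockets; the TSC delta may be suspect");
    else
        printf("stayed on processor id %u, delta %llu ticks\n",
               aux0, (unsigned long long)(t1 - t0));
    return 0;
}
```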
It could be that the processor is no longer allowed to sleep (to conserve power), and you are seeing the effects of Intel's CPU power-management (or rather the lack thereof) on I/O performance.
Or, it could be placebo.
How does it affect Windows XP, and a 2500K?