- Apr 4, 2001
- 2,776
- 0
- 0
I'm trying to code on a UNIX-based system, but I can't seem to get the computer to output time precisely enough to see how long it takes to execute a function. Can this be done?
Originally posted by: glugglug
The fact that your gettimeofday calls are giving you better than 10 ms resolution suggests the call is smart enough to read the current value of the counter being incremented, not just the system time value updated on interrupts. This surprises me a lot. Is there an optional flag for how that call works when installing Linux?
/*
 * This version of gettimeofday has microsecond resolution
 * and better than microsecond precision on fast x86 machines with TSC.
 */
void do_gettimeofday(struct timeval *tv)
{
	unsigned long seq;
	unsigned long usec, sec;
	unsigned long max_ntp_tick;

	do {
		unsigned long lost;

		seq = read_seqbegin(&xtime_lock);

		usec = cur_timer->get_offset();
		lost = jiffies - wall_jiffies;

		/*
		 * If time_adjust is negative then NTP is slowing the clock
		 * so make sure not to go into next possible interval.
		 * Better to lose some accuracy than have time go backwards..
		 */
		if (unlikely(time_adjust < 0)) {
			max_ntp_tick = (USEC_PER_SEC / HZ) - tickadj;
			usec = min(usec, max_ntp_tick);

			if (lost)
				usec += lost * max_ntp_tick;
		} else if (unlikely(lost))
			usec += lost * (USEC_PER_SEC / HZ);

		sec = xtime.tv_sec;
		usec += (xtime.tv_nsec / 1000);
	} while (read_seqretry(&xtime_lock, seq));

	while (usec >= 1000000) {
		usec -= 1000000;
		sec++;
	}

	tv->tv_sec = sec;
	tv->tv_usec = usec;
}
Originally posted by: glugglug
The time you get reading the system clock jumps in increments ranging from 10 ms to about 55 ms (on PCs), depending on how your particular OS programs the clock interrupt.
If you are using an x86 chip, your best bet is to use the RDTSC instruction. Google it and you will find some inline assembly to wrap in a function you can call to read it. It gives you the number of CPU cycles the machine has been running, as a 64-bit integer, on everything since the original Pentium; just divide by your clock speed to convert to seconds.
Originally posted by: stephbu
In Win32, the CPU's high-resolution timer attributes are also exposed through the NT performance counter API set: the QueryPerformanceCounter() and QueryPerformanceFrequency() APIs return the appropriate data.
I wonder what timer they are actually using here. Oddly enough, on WinNT/2K/XP the resolution reported by QueryPerformanceFrequency is about 3.6 MHz, while on Win98/ME it wraps the RDTSC instruction, so the resolution is the clock speed of your CPU. Also, calls to QueryPerformanceCounter take roughly 2000 CPU cycles each on 2K/XP. I'm guessing they no longer use the CPU instruction in case you have an asymmetric multiprocessor system?