C code to get time in milliseconds

AznMaverick

Platinum Member
Apr 4, 2001
2,776
0
0
I'm trying to code on a UNIX-based system, but I can't seem to get the computer to output time precisely enough to see how long it takes to execute a function. Can this be done?
 

EagleKeeper

Discussion Club Moderator<br>Elite Member
Staff member
Oct 30, 2000
42,589
5
0
If the function can be made stand-alone, create a test driver that calls the function in a loop x times and measure the total time. Divide by the number of loops and you will have a close estimate.
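
Something along these lines (just a rough sketch; my_function and the loop count are placeholders for whatever you're actually measuring):

#include <stdio.h>
#include <time.h>

/* Placeholder for the function being measured. */
static void my_function(void)
{
    /* ... work ... */
}

int main(void)
{
    const long loops = 1000000;   /* run enough iterations to get a measurable total */
    clock_t start, end;
    long i;

    start = clock();
    for (i = 0; i < loops; i++)
        my_function();
    end = clock();

    /* Total CPU time divided by the number of loops gives the per-call estimate. */
    printf("%.6f ms per call\n",
           (double)(end - start) * 1000.0 / CLOCKS_PER_SEC / loops);
    return 0;
}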
 

EagleKeeper

Discussion Club Moderator<br>Elite Member
Staff member
Oct 30, 2000
42,589
5
0
What is the accuracy of gettimeofday?

If the function is small, the delta may not show up.
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Probably depends on the OS and hardware, but a struct timeval has microsecond precision. In my experience it's accurate enough for pretty much anything (on x86/Linux).
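
For the original question (milliseconds), the usual pattern is to sample gettimeofday() before and after and convert the difference. A quick sketch; elapsed_ms is just an illustrative helper name:

#include <stdio.h>
#include <sys/time.h>

/* Difference between two gettimeofday() samples, in milliseconds. */
static double elapsed_ms(const struct timeval *start, const struct timeval *end)
{
    return (end->tv_sec - start->tv_sec) * 1000.0 +
           (end->tv_usec - start->tv_usec) / 1000.0;
}

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* ... code to be timed ... */
    gettimeofday(&end, NULL);

    printf("%.3f ms\n", elapsed_ms(&start, &end));
    return 0;
}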
 

Armitage

Banned
Feb 23, 2001
8,086
0
0
IME, don't try to do anything serious with that "microsecond" precision on x86. I was involved in a project that needed that sort of thing, and the PC clock just isn't good enough ... they eventually had to get a lab-grade frequency standard to make it work. I'd trust it to 1/10 or maybe 1/100 of a second.
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Hmm..

#include <sys/time.h>
#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timeval foo;

    /* Print the microseconds field each time gettimeofday() succeeds. */
    while (!gettimeofday(&foo, NULL))
        printf("%ld\n", (long)foo.tv_usec);
    return 0;
}

I get output like this:

319429
319433
319437
319441
319445
319449
319453
319457

4 microseconds.. I wonder how long it takes to call gettimeofday() and then call printf? My sense of scale kind of sucks at the microsecond level. :p
 

Armitage

Banned
Feb 23, 2001
8,086
0
0
I wasn't involved in the technical aspects of the project I mentioned, but IIRC, the problem was more one of longer-term drift. They would run the analysis in the morning and get one result, then run it in the afternoon, or the next day, and get something else. If they ran it back-to-back, it was pretty consistent. They eventually traced it back to drift in the clock ... 1 millisecond yesterday isn't quite the same as 1 millisecond today. It wasn't big enough that you'd see it in your 4-microsecond loop. More like a difference of a few microseconds over a few seconds of total time.

In any case, using an external frequency standard fixed the problem. This was a very specialized piece of work ... not likely that what Maverick has in mind will be affected, but it's good to have a feel for where the limits are.
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Ah, interesting. I can see that happening; I actually have my machines sync their time via ntp daily, because otherwise they'll drift a bit over time.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
The time you get reading the system clock jumps in increments ranging from 10ms to 56ms (on PCs) depending on how your particular OS programs the clock interrupt.

If you are using an x86 chip, your best bet is to use the RDTSC instruction. Google it and you will find some inline assembly to wrap it in a callable function. It gives you the number of CPU cycles the machine has been running, as a 64-bit integer, on everything since the original Pentium; just divide by your clock speed to convert to seconds.
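
The wrapper usually looks something like this (rough sketch, GCC inline-assembly syntax, x86 assumed; the cycles-per-second value is a placeholder you'd replace with your actual clock speed):

#include <stdio.h>

/* Read the time-stamp counter: cycles elapsed since the CPU was reset. */
static unsigned long long rdtsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
    const double cycles_per_sec = 2.0e9;   /* placeholder: your CPU's clock speed in Hz */
    unsigned long long start, end;

    start = rdtsc();
    /* ... code to be timed ... */
    end = rdtsc();

    printf("%llu cycles = %.9f seconds\n",
           end - start, (double)(end - start) / cycles_per_sec);
    return 0;
}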
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
In most OSes, even though the units used by APIs like SystemTime or gettimeofday are milliseconds or microseconds, most of those values will never actually be returned, because the clock isn't updated that frequently. In fact, the interrupt that tells the OS to update the current time is the same interrupt used to decide when to context switch between processes, so if it occurred every microsecond everything would be pretty slow from all the context-switch overhead and cache thrashing.

Instead of the output you showed above, what is more likely is something like this:

015125
015125
015125
...
015125
015125
030250
030250
030250
030250
030250
030250
....


The timer used for the PC real-time clock is 4.77 MHz (yeah, the old 4.77 MHz clock generator from the original PC XT is still there). Every 4th tick of this clock increments a 16-bit counter, which gives a maximum time of about 55ms between rollovers. In DOS it is actually allowed to go through the full cycle of all 65536 values between rollovers. Every time it hits 0, an interrupt is generated to increment the system clock. In Windows and Linux the interrupt handler sets the value up around 50000 so that the interrupts occur more frequently for better clock resolution (it's every 20ms in Win9x and NT, 15.125ms in Win2K/XP Server, 10ms in XP Pro; I don't know how fast they set it in current Linux versions).

The fact that your gettimeofday calls give you better than 10ms resolution suggests the call is smart enough to look at the current value of the counter being incremented, not just the system time value updated on interrupts. This surprises me a lot. Is there an optional flag for how that call works when installing Linux?
 

Ynog

Golden Member
Oct 9, 2002
1,782
1
0
Ahh, the problem of timers.

First thing to say: you said Unix, but I don't actually know what you're running under the hood. You cannot expect real-time behavior from a timer when you aren't on a real-time OS, so your OS can cause problems.

That said, for your problem, gettimeofday on non-real-time Unix systems should give you the resolution needed to time the length of the function.

I have read that, depending on your OS, gettimeofday can be accurate to microseconds.
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Originally posted by: glugglug

The fact that your gettimeofday calls give you better than 10ms resolution suggests the call is smart enough to look at the current value of the counter being incremented, not just the system time value updated on interrupts. This surprises me a lot. Is there an optional flag for how that call works when installing Linux?

No. Here's do_gettimeofday() from Linux 2.6.1:

/*
 * This version of gettimeofday has microsecond resolution
 * and better than microsecond precision on fast x86 machines with TSC.
 */
void do_gettimeofday(struct timeval *tv)
{
    unsigned long seq;
    unsigned long usec, sec;
    unsigned long max_ntp_tick;

    do {
        unsigned long lost;

        seq = read_seqbegin(&xtime_lock);

        usec = cur_timer->get_offset();
        lost = jiffies - wall_jiffies;

        /*
         * If time_adjust is negative then NTP is slowing the clock
         * so make sure not to go into next possible interval.
         * Better to lose some accuracy than have time go backwards..
         */
        if (unlikely(time_adjust < 0)) {
            max_ntp_tick = (USEC_PER_SEC / HZ) - tickadj;
            usec = min(usec, max_ntp_tick);

            if (lost)
                usec += lost * max_ntp_tick;
        }
        else if (unlikely(lost))
            usec += lost * (USEC_PER_SEC / HZ);

        sec = xtime.tv_sec;
        usec += (xtime.tv_nsec / 1000);
    } while (read_seqretry(&xtime_lock, seq));

    while (usec >= 1000000) {
        usec -= 1000000;
        sec++;
    }

    tv->tv_sec = sec;
    tv->tv_usec = usec;
}

I assume that the userland gettimeofday() simply calls this do_gettimeofday(), but I could be wrong. :p
 

stephbu

Senior member
Jan 1, 2004
249
0
0
Originally posted by: glugglug
The time you get reading the system clock jumps in increments ranging from 10ms to 56ms (on PCs) depending on how your particular OS programs the clock interrupt.

If you are using an x86 chip, your best bet is to use the RDTSC instruction. Google it and you will find some inline assembly to wrap it in a callable function. It gives you the number of CPU cycles the machine has been running, as a 64-bit integer, on everything since the original Pentium; just divide by your clock speed to convert to seconds.

In Win32, the CPU's high-resolution timer attributes are also exposed through the NT Performance Counter API: the QueryPerformanceCounter() and QueryPerformanceFrequency() APIs return the appropriate data.
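
Roughly like this (a sketch, assuming Win32 and <windows.h>):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);    /* counter ticks per second */

    QueryPerformanceCounter(&start);
    /* ... code to be timed ... */
    QueryPerformanceCounter(&end);

    printf("%.3f ms\n",
           (double)(end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart);
    return 0;
}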
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: stephbu
In Win32, the CPU's high-resolution timer attributes are also exposed through the NT Performance Counter API: the QueryPerformanceCounter() and QueryPerformanceFrequency() APIs return the appropriate data.

I wonder what timer they are actually using here. Oddly enough, in WinNT/2K/XP the reported resolution from QueryPerformanceFrequency is about 3.6 MHz, while in Win98/ME it wraps the RDTSC instruction, so the resolution is the clock speed of your CPU. Also, calls to QueryPerformanceCounter take roughly 2000 CPU cycles each in 2K/XP. I'm guessing they no longer use the CPU instruction in case you have a non-symmetric multi-proc system?
 

stephbu

Senior member
Jan 1, 2004
249
0
0
Originally posted by: glugglug
I wonder what timer they are actually using here. Oddly enough, in WinNT/2K/XP the reported resolution from QueryPerformanceFrequency is about 3.6 MHz, while in Win98/ME it wraps the RDTSC instruction, so the resolution is the clock speed of your CPU. Also, calls to QueryPerformanceCounter take roughly 2000 CPU cycles each in 2K/XP. I'm guessing they no longer use the CPU instruction in case you have a non-symmetric multi-proc system?

*Nods* The multiproc aspect certainly could skew readings - I think it may also vary between HAL implementations. I'll have a fish around to see where the vanilla HALs take their data from - it was discussed at one point.

The resolution is generally at least microseconds, but the cost of calling it is pretty high too (I assume there is a kernel-mode call somewhere in there).