Sub-millisecond timing without CPU usage

formulav8

Diamond Member
Sep 18, 2000
7,004
523
126
I am trying to get a near EXACT 1 second pause/timeout using the performance counter APIs. The problem is that it uses way too much CPU as coded below.

I know it is because of the loop, and I've tried various ways to drop the usage, but no go.


' s holds the counter value taken just before the loop;
' freq is the counter frequency (ticks per second) from QueryPerformanceFrequency
QueryPerformanceFrequency(freq)
QueryPerformanceCounter(s)

Do
    System.Windows.Forms.Application.DoEvents()
    QueryPerformanceCounter(e)   ' current counter value
    result = (e - s)             ' ticks elapsed so far
Loop Until result >= freq        ' exit once one full second has elapsed


I tried DoEvents and even a 1 ms Sleep inside the loop. While that did drop CPU usage, the timing was no longer accurate enough, since the Sleep API does not support sub-millisecond resolution and so can't hold the 1 second timeout precisely. I've tried low thread priority and such as well.

Does someone know how I can use the performance counters to get an almost EXACT 1 second pause/timeout without high CPU usage?


Thanks for any info. :)


Jason
 

EagleKeeper

Discussion Club Moderator, Elite Member
Staff member
Oct 30, 2000
42,589
5
0
Assuming that you are using Windows:

1) The Timer function for Windows will register at 1 ms resolution.
Register != execute.
Setting the priority of the application and the timer thread will help bring this closer to realtime.

2) In NT 4.0 there was a high-performance modification to the kernel to allow sub-millisecond resolution.
I do not know if this has been implemented in XP or Vista.


disclaimer:
I am not aware of what the Performance Counters are/do.

However, I would expect that you are taking a snapshot of something and want to do this at either a millisecond or a 1 second resolution.
The timer handler on a realtime priority setting should get you within one or two milliseconds of accuracy.
 

formulav8

Diamond Member
Sep 18, 2000
7,004
523
126
Hi. The Timer function, I believe, has the same resolution as the Sleep and timeGetTime APIs, which is 1 ms.

I already have the accuracy and resolution. The problem is the CPU usage when running the code. I need some way of using the performance counter APIs to give me as close to a perfect 1 second as possible.

In case you're wondering, this is the link to the APIs I am using.

Whether I use GetTickCount or timeGetTime in the loop, it pegs the CPU because of the loop. I do not know of any other way to measure the 1 second interval without the loop.

Jason
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
There is a high-performance extension to the WMI interfaces that you are using, but I don't know if it will really get you what you need. The problem is that in a preemptive multitasking system like Windows, getting accurate sub-ms timings depends on all the other things the machine is doing.

Here is an overview of the high-performance provider. If you aren't already using it, it might be worth a shot.

http://msdn2.microsoft.com/en-us/library/aa384740.aspx
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
Profile your code. Are you simply calling QueryPerformanceCounter 1,000,000,000 times a second or is DoEvents eating all your CPU time?

Any time you call an OS function you perform an 'int 0x2e' transition from user mode to kernel mode, invoking all the usual costs of thread scheduling and context switches. Doing that in a loop as fast as you can is not advised. A 1 ms Sleep is the same thing; you're just throwing your thread to the back of the scheduler's queue every iteration.

Your best bet is to use some kind of high-resolution, high-priority, timer-interrupt-based callback. Set that callback to fire at maybe 900 ms (or even sleep for 900 ms), then use a more precise method to fine-tune the wait out to exactly 1000 ms. That should use nearly zero CPU, because your thread will not even be executed for 90%+ of the wait period. The goal in eliminating CPU usage should be to remove your thread from the scheduler's ready list entirely while you are waiting. The closer your thread wakes up to 1000 ms without going over, and the fewer iterations you spend actively padding it out to 1000 ms with more precise methods, the lower the CPU use will be.

Ideally you want to tune your program so the thread wakes up within 5 ms of your target timing value, so that you can resolve it to 1000 ms and still have time to call the events within that thread quantum (CPU time slice, about 10 ms on Windows IIRC) without losing the CPU to the next thread and having the 1000 ms pass while another thread is running.
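The coarse-sleep-then-spin idea above can be sketched in portable C++ (a hypothetical illustration using std::chrono instead of the Win32 timer APIs being discussed; precise_wait_ms and the 10 ms spin margin are made-up names/values, not anything from the thread):

```cpp
#include <chrono>
#include <thread>

// Sleep away the bulk of the interval (no CPU use), then busy-wait
// only the last few milliseconds to hit the deadline precisely.
void precise_wait_ms(int total_ms, int spin_margin_ms = 10)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::milliseconds(total_ms);

    // Coarse phase: give up the CPU until we are near the deadline.
    const auto coarse = deadline - std::chrono::milliseconds(spin_margin_ms);
    if (clock::now() < coarse)
        std::this_thread::sleep_until(coarse);

    // Fine phase: spin out the remaining few milliseconds.
    while (clock::now() < deadline)
        ; // busy-wait
}
```

The thread is off the scheduler's ready list for all but the spin margin, so CPU use scales with the margin, not the total wait.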

Or just use an OS designed exactly for this kind of realtime priority, like VxWorks, if the project allows. It's going to be extremely hard, if not impossible, to get precise and predictable event timing on a multitasking, GUI-driven, non-realtime OS like Windows. All you can do is get acceptably close and live with it.

Another thing you can do is accumulate the error, if the 1 second events are not realtime-sensitive. For example, say every iteration takes 1005 ms; after 200 iterations you will have accumulated 1 second of error, and you would call your timed event twice. The same goes for subtracting time when the timer fires early. It wouldn't be suitable for something realtime where the events have to be spaced exactly 1 second apart, but it would guarantee x events in x seconds even after numerous days have passed.
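The error-accumulation idea can be sketched like this (a hypothetical helper; AccumulatingTimer and due() are invented names for illustration). The trick is to compute how many events *should* have fired by now against an absolute start time, so per-tick error cancels out instead of drifting:

```cpp
#include <cstdint>

struct AccumulatingTimer {
    std::int64_t start_ms;    // absolute time the timer began
    std::int64_t period_ms;   // nominal interval, e.g. 1000 ms
    std::int64_t fired = 0;   // events delivered so far

    // Call with the current clock reading; returns how many events to
    // deliver right now so the long-run rate stays exact.
    std::int64_t due(std::int64_t now_ms) {
        std::int64_t should_have = (now_ms - start_ms) / period_ms;
        std::int64_t pending = should_have - fired;
        fired = should_have;
        return pending > 0 ? pending : 0;
    }
};
```

If the loop runs late (say a wakeup at 5000 ms after only two deliveries), due() returns 3 and the count catches up; individual events are not evenly spaced, but the total over any long window is exact.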


Also, you are using VB, so there may be VM overhead adding to timer error (from the moment the VM gets the result from the OS to the moment the code in your program evaluating that timer gets interpreted by the VM). In a language like Java or VB, thousands of machine cycles can literally pass from one line of program source to the next as the VM cycles through its main loop while processing each statement. In a language like C, on the other hand, each set of real CPU instructions runs on the order of nanoseconds, and the code evaluating the timer conditions is likely to execute much closer to the actual time elapsed by the timer.

As you can see, there are lots of reasons you can expect to never get perfect 1 second timing. What if you nail it at exactly 1000 ms and call your 1 s event handler, only to have that handler suspended by the scheduler at 1005 ms because its thread's time slice expired, then resume at 1290 ms? Windows isn't the ideal environment for perfect timing.

Anyhow, if what you are doing works for your needs but just uses too much CPU time, try sleeping for 990 ms and then burning the last 10 ms with a polling loop like the one you posted. Burn as much of the time as you can not running at all.

You may want to put your form's event handler in its own thread as well. There is no need to call it that often, and I assume you only put it in there for app responsiveness inside your timing loop, so that the app doesn't freeze in 1 second intervals. Have your DoEvents thread use a non-blocking message queue polling function that sleeps and wakes when a new message arrives, so that you are only calling DoEvents when there are messages to process.
 

Noobsa44

Member
Jun 7, 2005
65
0
0
I'm not sure this is your problem, but it is one I have run into in the past. DoEvents uses a lot of processing power and, when run over and over, can eat up CPU. One thing that can help your CPU usage is to check whether you need to DoEvents before doing events. You may want to look at this code for an example of how to check whether any events have occurred before executing the DoEvents command.

Also, now that I think about it, there is a Stopwatch class in .NET which I know uses high-resolution timing. I don't know if it has an alarm-clock-like feature, but it may be worth looking into.
 

homercles337

Diamond Member
Dec 29, 2004
6,340
3
71
I have used this before.

The header...
#pragma once
#include <windows.h>

class CDuration
{
protected:
    LARGE_INTEGER m_liStart;
    LARGE_INTEGER m_liStop;

    LONGLONG m_llFrequency;
    LONGLONG m_llCorrection;

public:
    CDuration(void);

    void Start(void);
    void Stop(void);
    double GetDuration(void) const;  // returns microseconds
};

inline CDuration::CDuration(void)
{
    LARGE_INTEGER liFrequency;

    QueryPerformanceFrequency(&liFrequency);
    m_llFrequency = liFrequency.QuadPart;

    // Calibration: measure the overhead of a Start()/Stop() pair itself
    Start();
    Stop();

    m_llCorrection = m_liStop.QuadPart - m_liStart.QuadPart;
}

inline void CDuration::Start(void)
{
    // Yield the rest of this time slice so we are less likely
    // to be preempted mid-measurement
    Sleep(0);
    QueryPerformanceCounter(&m_liStart);
}

inline void CDuration::Stop(void)
{
    QueryPerformanceCounter(&m_liStop);
}

inline double CDuration::GetDuration(void) const
{
    // Elapsed time in microseconds, with calibration overhead removed
    return (double)(m_liStop.QuadPart - m_liStart.QuadPart - m_llCorrection) * 1000000.0 / m_llFrequency;
}

The function call...

#include <time.h>

// Busy-waits for msec milliseconds; note this spins the CPU for the whole wait
void waitTime(int msec)
{
    clock_t start = clock();
    while ((clock() - start) / (CLOCKS_PER_SEC / 1000.0) < msec)
        ;
}
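For reference, the waitTime() busy-wait above can be written without platform assumptions (a sketch only; waitTimeChrono is an invented name). One caveat with the original: on POSIX systems clock() measures CPU time rather than wall time, so it behaves as intended on Windows but not everywhere; std::chrono::steady_clock measures wall time on every platform:

```cpp
#include <chrono>

// Portable equivalent of the clock()-based busy-wait. This is still a
// pure spin: it holds one core at 100% for the whole wait, which is
// exactly the CPU cost the thread is trying to avoid.
void waitTimeChrono(int msec)
{
    const auto end = std::chrono::steady_clock::now()
                   + std::chrono::milliseconds(msec);
    while (std::chrono::steady_clock::now() < end)
        ; // busy-wait
}
```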
 

formulav8

Diamond Member
Sep 18, 2000
7,004
523
126
Thanks a LOT for all of the info!

Mark: I did look at WMI before but I couldn't find anything to really help me out to make it any better.

exdeath: I did figure out that it is the actual looping that is causing the high CPU usage. I likewise tried with DoEvents, without DoEvents, and using GetInputState before calling DoEvents, but still no go.
I can already get sub-millisecond timing using the performance counter. The problem is using it to count 1 second with a loop that keeps going until the 1 second has elapsed.
I also tried the built-in Windows timers. I used SetTimer/KillTimer, and while it didn't have any CPU usage, it still didn't have high enough resolution. It was about the same resolution as the Sleep API and the timeGetTime timer.
I also tried running the loop in its own thread, but still high CPU usage. I even tried setting it to the lowest priority and it still wanted to peg the CPU.
I guess I will have to take your suggestion and accumulate the timer's error, applying the difference to the actual results.

Noobsa: Thanks for the info. I did try that before and still no go. I actually created that example as well :)

homercles: Unfortunately I am not too great at C coding. I can sometimes translate it to VB, but not run it as C code. Maybe I could create a C .dll, export that timeout function, and use it from VB? Hmm, might be worth checking out. Thanks for the info!


Again, I really do appreciate all of the info. I now have a couple options I can try and see where that gets me :)


Jason
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Ironically, I'm using QPC for an elapsed-time function on, of all things, a synchronous disk copy operation that compresses the file stream with zlib on the fly. For every block of data "transferred" (the block size is user-definable from 1 byte up to however large the memory manager lets you make it without barfing), a callback is made into a "progress" function that queries the performance counter, does some 64-bit math, calculates the elapsed time, etc...

Depending on the device being copied to/from, I never see more than about 25% CPU usage from my app - and that would be while copying from a USB memory stick or something with similar inherent system-CPU overhead. From a hard drive I normally see around 13% CPU usage. I'm most certainly not doing anything special here.

My question for you - is DoEvents checking the message queue or something? If so, you probably shouldn't call it quite so often. That's probably eating a lot of overhead right there.
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
exdeath: I did figure out that it is the actual looping that is causing the high CPU usage. I likewise tried with DoEvents, without DoEvents, and using GetInputState before calling DoEvents, but still no go.

How about calling yield() in the loop? That will give the proc some time to schedule other threads.