I have written some OpenGL code and I put it under the WM_TIMER message to get maximum performance. I set the timer to fire every 0.001 seconds. It seems this is not a good idea. Please tell me how I can get maximum performance from my code.
I’m also interested in how to synchronise refresh rates in my code, e.g. I need 30 fps, but my code runs at about 100 fps.

Hi Davor!

Well, WM_TIMER is known to be inaccurate as hell (don’t expect it to be precise; read the docs and tech articles about it).

If you want precise timing, you should use a multimedia timer. There is a good example in the docs, but if you can’t figure it out, I can give you some code. First, though, you should try it yourself; only that way will you learn and understand (no offense intended).

Btw.: 30 FPS means (about) a 33 ms delay per frame (but take your monitor’s refresh rate into account).

Kind regards,


(Even though it was not an OpenGL-related question.)

Thanks for your reply!
OK, I’m at work right now, so I can’t study multimedia timers at the moment.
How can I set up an mm timer in my code?
Is there a WM_MULTIMEDIA_TIMER message or something? Can you give me a simple explanation of how to use a multimedia timer?
Best regards
Davor Grguric

You want the timeGetTime() function; IIRC you’ll need to #include "mmsystem.h" and link with winmm.lib

There’s a function called QueryPerformanceCounter() which is even higher precision (it’s designed to be used for profiling) but it’s more of a pain to use, isn’t guaranteed to be supported on all systems and is probably overkill in this case.

Hi Davor!

No, multimedia timers are not message driven; they are actually interrupt driven.

This means you need to provide a static function that is called each time the timer event fires (a kind of callback function).

Well, you will find pretty good documentation and samples within the VisualC++ online help.

MikeC: I think using the performance counters is a bit of overkill in this case.

Best regards,


timeGetTime is probably the easiest way to do what you want. It’s simple (in fact, simpler than WM_TIMER) and it works better.

Well, thanks a lot, guys. I will try this out.
Davor G.

Another solution, which takes a bit more work but is more accurate (in a certain sense), is to let the system run as fast as it can (don’t rely on a timer to tell it when to start drawing the next frame). Then scale each object’s movement by the frame time. That way, if you get a temporary framerate drop for whatever reason, the objects will always be in the right place at the right time. Even better, if the end user has a killer machine (a PIII 800 with a GeForce2), the framerate isn’t limited by some cheesy timer, so the user actually gets a benefit from having a fast system. I know this is a somewhat simplified description of how it all works, but this is typically how Quake III handles moving objects, etc.