Thoughts on Timing an OpenGL Application...

Hello Everybody!

Something I have been thinking about several times in the past is how to accurately time an OpenGL application.

The idea behind my thought is to make the application run at the same speed and smoothness on every system that has enough CPU power to do so.

When I try out my apps on my friend's system, for example, they run at quite a different speed (a lot faster in this particular case) than on my PC.

My question is: what is the best way to achieve this?

I went through many of the available tutorials (like one of Nate's excellent tutorials, NeHe's superb ones, etc.).

Some of my own ideas:

[SAVE HARDWARE TIMER TICKS]
[DRAW SCENE]
[WAIT UNTIL A CERTAIN NUMBER OF TICKS HAS OCCURRED]
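
In code that would look roughly like this (just a sketch, not my actual code; std::chrono and std::this_thread are used here only as a portable stand-in for the hardware timer):

#include <chrono>
#include <thread>

void DrawScene(); // placeholder for the actual rendering

void FrameLoop()
{
    using clock = std::chrono::steady_clock;
    const auto frameTime = std::chrono::milliseconds(17); // cap at roughly 60 FPS

    for (;;)
    {
        auto start = clock::now();                        // [SAVE HARDWARE TIMER TICKS]
        DrawScene();                                      // [DRAW SCENE]
        std::this_thread::sleep_until(start + frameTime); // [WAIT UNTIL ... TICKS HAVE OCCURRED]
    }
}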

The problem with this one is that the framerate is capped at a particular value.

The second idea I had, but have not been able to implement so far, is this one:
Pack all the object movement into one function. Call this function at fixed intervals (let's say every 100 ms)
and then interpolate the movement of the objects in the period between the position changes.
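
Something like this is what I have in mind (only a sketch; Object, UpdateMovement, and the numbers are made up for illustration):

struct Object
{
    float prevX; // position at the previous 100 ms update
    float currX; // position at the latest 100 ms update
};

// Called exactly once per 100 ms period: do the real movement here.
void UpdateMovement(Object& o)
{
    o.prevX = o.currX;
    o.currX += 1.0f; // e.g. move 1 unit per period
}

// Called every frame: alpha in [0,1] says how far we are into the
// current 100 ms period, and we blend the two known positions.
float InterpolatedX(const Object& o, float alpha)
{
    return o.prevX + (o.currX - o.prevX) * alpha;
}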

Please tell me what you think about it. Maybe you have already implemented something like this yourself and can explain your approach.

I’m looking forward to your comments.
Martin

There is a pretty straightforward way to do what you want. Assign a velocity vector, V, to each moving object in your application. You will probably end up changing this value each frame based on user input or internal physics. Every frame, call a function that tells you how much time has passed since you last called it. Call this time T. Once per frame, move each object by an amount V*T (a distance vector) and draw it. This way, faster computers will draw more frames per second (smoother animation), but the behavior of your program will be the same on all computers.
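
A minimal sketch of that scheme (GetDeltaSeconds and the Object type are made-up names, and std::chrono is just one portable way to read the timer):

#include <chrono>

// Returns the time in seconds since the previous call.
float GetDeltaSeconds()
{
    using clock = std::chrono::steady_clock;
    static auto last = clock::now();
    auto now = clock::now();
    float dt = std::chrono::duration<float>(now - last).count();
    last = now;
    return dt;
}

struct Object
{
    float x, y;   // position
    float vx, vy; // velocity V, in units per second
};

// Once per frame: move each object by V*T before drawing it.
void UpdateObject(Object& o, float dt)
{
    o.x += o.vx * dt; // faster machines call this more often with a
    o.y += o.vy * dt; // smaller dt, so the motion per second is identical
}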

Have a look at this: http://www.flipcode.com/forums/cotd/COTD-TimerClass.shtml

Hope it helps.

Do it like in real life: have the position of an object depend on time. Here are the formulas (where P is position, T is time in any unit you want, A is acceleration, V is velocity, and V0 is the starting velocity):

P = V*T // no acceleration; change V independently

P = V0*T + (A*T^2)/2 // acceleration; V0 can be zero if the object has no speed to begin with

T is the time since the origin of the motion.
I do my movement by calculating a delta position with P = V*T, where T is the time that has elapsed since the last frame. Then I just move my object P units along its direction. V could be in units/ms.

Point: make all your movement depend on time.
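
A sketch of the accelerated case, assuming dt is the frame's delta time in seconds (Body and Step are made-up names):

struct Body
{
    float x; // position P
    float v; // velocity V, units per second
    float a; // acceleration A, units per second^2
};

// Advance one frame: P += V0*T + (A*T^2)/2, then V += A*T.
void Step(Body& b, float dt)
{
    b.x += b.v * dt + 0.5f * b.a * dt * dt;
    b.v += b.a * dt;
}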

Yep, you don't want to wait out all that time and get no benefit from new hardware. Just update based on the measured delta time, perhaps with a low-pass filter in there to smooth out spikes. If your simulation is trivial compared to the draw time, you might also want to update the animation several times each frame on machines with slower graphics.
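
For example, something along these lines (only a sketch; the smoothing factor, the step size, and the function names are arbitrary assumptions):

#include <chrono>

void UpdateSimulation(float dt); // placeholder: one fixed step of movement/physics
void DrawScene();                // placeholder: render the current state

void RunFrame()
{
    using clock = std::chrono::steady_clock;
    static auto last = clock::now();
    static float smoothedDt = 0.0f;
    static float accumulator = 0.0f;
    const float stepSize = 0.01f; // 10 ms per simulation step

    auto now = clock::now();
    float rawDt = std::chrono::duration<float>(now - last).count();
    last = now;

    // Simple low pass to smooth out frame-time spikes.
    smoothedDt = 0.9f * smoothedDt + 0.1f * rawDt;
    accumulator += smoothedDt;

    // On slow machines this loop runs several times per drawn frame.
    while (accumulator >= stepSize)
    {
        UpdateSimulation(stepSize);
        accumulator -= stepSize;
    }
    DrawScene();
}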