OT: C++ performance timers.

[b]OK, from there I simply add this:

PlayerCtr = PlayerCtr + Move*TimeStepAverage;

So if the number of frames this second was 120, the move is scaled by 1/2, making the physics just as accurate as at 60 while keeping the frame rate independent.

Also, if the frame rate dropped to 30, the move would be multiplied by 2, keeping it accurate and not frame-based. Does this sound feasible?[/b]

No… The idea is correct, but the problem you will most certainly encounter is that the true “FPS” will sometimes vary greatly within a second (or whatever averaging interval you choose), which will result in very strange, jerky movements.

You really need to change TimeStepAverage to TimeStepThisFrame, and then you actually have the “dt” method anyhow.
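A minimal sketch of that change (GetTimeSeconds() is a placeholder for whatever high-resolution timer you use; since Move above is tuned for one 60 Hz frame, dt is scaled against that reference):

tNow = GetTimeSeconds();          // placeholder for your timer
dt   = tNow - tPrev;              // seconds since the previous frame
tPrev = tNow;

TimeStepThisFrame = dt * 60.0;    // 1.0 at 60 FPS, 0.5 at 120, 2.0 at 30
PlayerCtr = PlayerCtr + Move * TimeStepThisFrame;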

There are two ways to do this:

  1. Constant frame rate (locked with VSync and/or whatever clever timing loop) - compensate physics etc. if you “lose” one or more frames (you’re not fast enough to meet the frame deadline) by “ticking” twice or more the next frame

  2. Variable frame rate - measure the time it took to render the last frame, and give it your best shot and guess that the next frame will take the same time to render

In my opinion, the latter method is the simpler one (e.g. it’s not possible to guarantee VSync on, nor is it easy to set the monitor refresh rate to a suitable value based on the performance of the target system).

Note that even with method two, you may need to do several “ticks” for one frame, since if dt gets too large, physics and collision detection will become unstable.
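One common way to do those multiple “ticks” (my sketch, not from the original posts) is to cap the step size and sub-step with an accumulator:

const double TICK = 1.0 / 120.0;   // fixed physics step, small enough to stay stable
static double accumulator = 0.0;   // unconsumed simulation time

accumulator += dt;                 // dt = measured time for the last frame

// Tick the simulation as many times as needed to consume the frame time,
// so physics never sees a step larger than TICK.
while( accumulator >= TICK )
{
  Physics( TICK );
  accumulator -= TICK;
}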

Originally posted by marcus256:
<…>

  • Handle 64-bit wrap-arounds
    They don’t matter if all you want is delta t. Unsigned subtraction eliminates wraparound … almost.

64 bits is about 100 years @ 5 GHz. If you manage to need a time delta larger than that, it’ll wrap around to zero. But I don’t really think it’ll hurt your animation much

(even one year @ 500GHz still seems good enough to me)

>> Handle 64-bit wrap-arounds
> They don’t matter if all you want is delta t. Unsigned subtraction eliminates wraparound … almost.

Yes, I realized that too yesterday (I hadn’t thought much about it - just assumed “64 bits is plenty”). I think I’m on the safe side, but I may have to look through my code to make sure I don’t do anything foolish.
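For what it’s worth, here is a small illustration (mine, not from the thread) of why unsigned subtraction survives a single wrap-around:

#include <cstdint>
#include <cassert>

int main()
{
  uint64_t prev = UINT64_MAX - 5;  // sample taken just before the wrap
  uint64_t now  = 4;               // sample taken just after the wrap

  // Unsigned subtraction is modulo 2^64, so the delta is still correct
  // as long as fewer than 2^64 ticks elapsed between the two samples.
  uint64_t delta = now - prev;
  assert( delta == 10 );
  return 0;
}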

Would this work then?

Hang your display function off a timer that fires every e.g. 20 ms, to give you 50 FPS (or whatever frame rate you need).

Set an idle function to compute your physics as required for the next frame. Set a flag to stop the idle loop re-calculating.

In the display func, draw the graphics, and then reset the physics flag to let the next idle loop calculate the next step.
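In GLUT terms, that might look something like this (a sketch; ComputePhysics() and DrawScene() are placeholders for your own code):

#include <GL/glut.h>

void ComputePhysics();             // placeholder: advance the simulation one step
void DrawScene();                  // placeholder: render the frame

static bool physicsDone = false;   // stops the idle loop re-calculating

void timer( int value )
{
  glutPostRedisplay();             // fire the display function
  glutTimerFunc( 20, timer, 0 );   // re-arm the timer: ~50 FPS
}

void idle( void )
{
  if( !physicsDone )
  {
    ComputePhysics();              // one physics step for the next frame
    physicsDone = true;            // don't re-calculate until it's drawn
  }
}

void display( void )
{
  DrawScene();                     // draw the graphics
  glutSwapBuffers();
  physicsDone = false;             // let the idle loop calculate again
}

// In main(): glutDisplayFunc( display ); glutIdleFunc( idle );
//            glutTimerFunc( 20, timer, 0 );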

btw: I did notice the () on clock(); I was just pointing out that not all solutions are Windows, and not all Unices are the same either! I was being a little “humorous” … sorry if it wasn’t taken that way!

Rob.

I think I will just stay with my current method. All this talk of performance timing is making my head hurt. There really should be an easier way, but I guess not. lol.

Current method: I have married the physics, collision detection, movement, graphics, EVERYTHING together (seems logical anyhow).

I go through my scene (render loop, physics loop, etc.), then get to the bottom of my loop and call a timer function based on glutTimerFunc. It simply tells me whether it has been more than 16 ms since my last frame. If not, it waits until then to fire the next loop; if so, it fires the next loop immediately. This way I NEVER get out of sync with my physics, and everything moves at an exact speed. The only problem is that if a PC is TOO fast, it will spend a lot of time waiting, but at 60 FPS I don’t much care about that. As long as it doesn’t drop below 60, because then the movement will get jerky.
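That wait-until-16 ms check might look roughly like this (a sketch; glutGet(GLUT_ELAPSED_TIME) is one real clock it could read):

static int lastFrameMs = 0;
const int  FRAME_MS    = 16;                 // ~60 FPS

// Busy-wait out the remainder of the frame, then start the next one.
while( glutGet( GLUT_ELAPSED_TIME ) - lastFrameMs < FRAME_MS )
  ;                                          // spin until 16 ms have passed

lastFrameMs = glutGet( GLUT_ELAPSED_TIME );  // next frame starts now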

I was thinking about also putting in a fallback mechanism. Basically, I would base all my movements on 60 FPS and do a timer check (using the C library timer, because I only need second-to-second accuracy to count frames per second). Every 5 seconds or so I would simply check what the average frame rate was. If it was TOO low, I would drop my frame-rate target from 60 to 45, and so on. This way the picture may become a little choppy, but the physics and player movement wouldn’t. Make sense?
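A rough sketch of that fallback check (illustrative only; time() is the C library clock mentioned, and NextLowerRate() is a hypothetical helper):

#include <ctime>

static time_t windowStart = 0;   // start of the current 5-second window
static int    frameCount  = 0;   // frames rendered in that window
static int    targetFps   = 60;  // current fixed-rate target

// Call once per frame: every ~5 seconds, compare the average FPS against
// the target and fall back to a lower fixed rate if the machine can't keep up.
++frameCount;
if( time( NULL ) - windowStart >= 5 )
{
  int avgFps = frameCount / 5;
  if( avgFps < targetFps )                   // targetFps: 60, then 45, ...
    targetFps = NextLowerRate( targetFps );  // hypothetical helper
  frameCount  = 0;
  windowStart = time( NULL );
}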

Only to the developer :P

I prefer using the previous frame’s time, because it doesn’t really matter at all if your framerates don’t fluctuate more than, oh, maybe four to eight percent of your FPS…? I dunno, that’s just a guess.

I don’t think it matters much. By virtue of double or triple buffering and hardware pipes you’ll be a bit behind anyway.

It is important that all time spent is measured and applied to physics. Which means that if your animation runs for ten minutes, you’d better let your physics code see ten minutes (with the notable exception of slow motion and other time base ‘effects’).

I don’t think that delay tactics or sparse time sampling can stand this test. Accuracy, and exactly one sample per frame, that’s what you need.

[This message has been edited by zeckensack (edited 10-31-2002).]

Originally posted by LostInTheWoods:
I think I will just stay with my current method. All this talk of performance timing is making my head hurt. There really should be an easier way, but I guess not. lol.

In my opinion, the easiest answer IS the “performance timer” solution (frame-to-frame timing):

double t, dt;
double t_old = glfwGetTime();   // initialize so the first dt is sane

do
{
  // Get time and delta time for this frame
  t = glfwGetTime();
  dt = t - t_old;
  t_old = t;

  // As suggested, do everything here
  UserInput();
  Physics( t, dt );
  Collision( t, dt );
  Draw();

  // We're done with this frame, swap buffers
  glfwSwapBuffers();
} while( !some_criterion );

In your physics engine, for instance, use dt for calculating propagation rather than some fixed value or average frame-time value (I also included ‘t’ in the calls to the functions in the pseudocode, since some things may work better with absolute time than with propagation).

This will give you by far the most accurate simulation, and it’s really simple. You don’t have to worry about lost frames or dropping framerates (you know, you will probably have completely different FPS for different scenes; just dropping to the lowest of these would probably give you 10 FPS on many machines).
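As an illustration of what Physics( t, dt ) might do with dt (my sketch, not marcus256’s actual code):

// Simple Euler integration driven by the measured per-frame dt. The state
// here is a single 1-D particle; real code would loop over all objects.
static double pos = 0.0, vel = 0.0, acc = -9.81;

void Physics( double t, double dt )
{
  vel += acc * dt;   // integrate acceleration over the real frame time
  pos += vel * dt;   // integrate velocity over the real frame time
  (void) t;          // absolute time unused in this simple example
}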

Originally posted by jwatte:
The RDTSC instruction is portable to all user-mode x86 operating systems, assuming you can find a portable assembly syntax (I like NASM for that reason). Of course, then you’re faced with trying to figure out the CPU speed, which is, uh, “hard” on SpeedStep chips.
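For reference, reading the TSC looks something like this (a sketch using GCC inline assembly rather than NASM; the ReadTSC wrapper name is mine):

#include <cstdint>

// RDTSC returns the low 32 bits of the time stamp counter in EAX and the
// high 32 bits in EDX; combine them into one 64-bit value.
static inline uint64_t ReadTSC( void )
{
  uint32_t lo, hi;
  __asm__ __volatile__ ( "rdtsc" : "=a" (lo), "=d" (hi) );
  return ( (uint64_t) hi << 32 ) | lo;
}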

Hi jwatte,

Do you know of a solution for this SpeedStep problem? I was thinking of disabling RDTSC (and falling back to some lesser timer source) on CPUs with variable core frequency, and this is what I came up with:

  1. Transmeta apparently has a constant clock for the TSC, even if the core clock is changing - Transmeta rocks!
  2. It is simple to detect AMD PowerNOW! using the CPUID instruction => disable RDTSC on those chips
  3. There is no way in h*ll to detect Intel SpeedStep (?!)

It may be possible to have a “chip detection list”, flagging “no RDTSC” for all Intel chips that are labelled “Mobile” (basically, check bits 7-0 of EBX for CPUID function 1; 06, 07, 0e and 0f should be Mobile).
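That check might look something like this (a sketch; the brand-ID values are the ones listed above, and a real version should first verify the vendor string is “GenuineIntel”):

#include <cstdint>

// Read the CPUID function 1 brand index (bits 7-0 of EBX) and flag the
// "Mobile" brand IDs mentioned above (06h, 07h, 0eh, 0fh).
static bool LooksLikeMobileIntel( void )
{
  uint32_t eax, ebx, ecx, edx;
  __asm__ __volatile__ ( "cpuid"
                         : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
                         : "a" (1) );
  uint32_t brand = ebx & 0xff;
  return brand == 0x06 || brand == 0x07 || brand == 0x0e || brand == 0x0f;
}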

Any better ideas?

What we ended up doing was running RDTSC, QueryPerformanceCounter() and timeGetTime() in parallel, and voting.

When RDTSC was out-voted by the other two, we took that to mean that the CPU speed had changed and updated our measurement.

Given the specific nature of our real-time simulation, we ended up using RDTSC for intra-frame timing (because it’s so cheap) and actually voting/updating/base-lining once per frame.
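A sketch of what that voting might look like (my reconstruction; the 1% agreement threshold and all names are invented, but QueryPerformanceCounter() and timeGetTime() are the Win32 calls named above):

#include <cmath>
#include <cstdint>

// Compare the elapsed time each source reports for the same interval and
// distrust RDTSC when the other two agree it is off (i.e. the CPU clock
// probably changed). Thresholds are illustrative only.
void VoteAndRebaseline( uint64_t tscDelta,           // RDTSC ticks this interval
                        double   dtQPC,              // seconds, per QueryPerformanceCounter()
                        double   dtTGT,              // seconds, per timeGetTime()
                        double  &tscTicksPerSecond ) // current TSC calibration
{
  double dtTSC = tscDelta / tscTicksPerSecond;

  bool qpcAgreesWithTgt = std::fabs( dtQPC - dtTGT ) < 0.01 * dtTGT;
  bool tscAgreesWithQpc = std::fabs( dtTSC - dtQPC ) < 0.01 * dtQPC;

  if( qpcAgreesWithTgt && !tscAgreesWithQpc )
  {
    // RDTSC is out-voted: re-baseline the assumed TSC rate.
    tscTicksPerSecond = tscDelta / dtQPC;
  }
}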