Frame rate management

Hi all!

I’m working on an experimental 3D animated data visualization app using OpenGL. I plan to implement two modes: an ‘interactive’ mode with a smooth real-time frame rate, and a ‘batch’ mode which renders to video files. I figure batch mode won’t require special concern for real-time rates, but I expect there’s a whole art to getting a stable frame rate out of wildly-varying per-frame computation.

So I threw together a function which requests the next window redraw from an FLTK one-shot timer callback. The timeout is calculated by comparing a medium-resolution OS timer to the next frame period. My off-the-cuff function hasn’t produced accurate timing after an hour or so of fiddling, so I figure it’s time to throw it out and implement a properly engineered solution. Although I can probably come up with something that does a good job, I’d like to learn about established practice first.
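
For reference, the shape of what I have now is roughly this (simplified; frame_period and next_frame_time are my own bookkeeping, and I’m using clock_gettime(CLOCK_MONOTONIC) as the timer):

#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <ctime>

static double frame_period = 1.0 / 60.0;   // target frame period, in seconds
static double next_frame_time = 0.0;       // absolute time the next frame is due

static double now_seconds() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static void frame_timer_cb(void* win) {
    static_cast<Fl_Window*>(win)->redraw();  // request the next redraw
    next_frame_time += frame_period;         // schedule the frame after it
    double delay = next_frame_time - now_seconds();
    if (delay < 0.0) delay = 0.0;            // we're late; fire immediately
    Fl::add_timeout(delay, frame_timer_cb, win);
}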

What’s this facility normally called anyway? I figure it’s got to be a component of most interactive animated programs, but search terms like “software frame rate governor” turn up zillions of unrelated hits.

How do I arrange the GL calls within a frame’s time window? Should I do the glDrawArrays as early as possible, then sleep, and then call glXSwapBuffers right when the frame should be displayed?
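
That is, roughly this shape (remaining_ns, display, window, and vertexCount are placeholders for my own bookkeeping):

glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // issue the frame's work early
timespec ts = { 0, remaining_ns };           // time left until the display deadline
nanosleep(&ts, nullptr);                     // sleep off the slack
glXSwapBuffers(display, window);             // present right at the deadline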

Should I use GLX_SGI_swap_control? What if the display is at 60Hz and my other clock source consistently generates 59 or 61 FPS? Can I get a callback after the vsync has occurred?

Do I need to keep timestamps going back a certain number of frames and smooth over them for stable timing?
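
For concreteness, what I imagine is something like a moving average over the last N frame timestamps (a sketch, with an arbitrary window size):

#include <deque>

static std::deque<double> frame_stamps;   // timestamps of recent frames, seconds
static const size_t kWindow = 30;         // smoothing window; arbitrary choice

// Returns the smoothed frame period, or 'fallback' until enough history exists.
double smoothed_period(double now, double fallback) {
    frame_stamps.push_back(now);
    if (frame_stamps.size() > kWindow) frame_stamps.pop_front();
    if (frame_stamps.size() < 2) return fallback;
    return (frame_stamps.back() - frame_stamps.front()) / (frame_stamps.size() - 1);
}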

Should I just get a book on game programming?

  • Experimental project for learning on, but I’d also like to get some real use out of it soon.
  • Developing to OpenGL 3.3 core ('cause it’s the future and all that).
  • Using FLTK for windowing, but not their GL window component.
  • Plain GLX, no GLUT.
  • Rendering does not yet take up appreciable time, but I intend to push it hard once I start bringing in real data.
  • Currently it’s single-threaded, but I have no objection to going multithreaded as part of a good solution.
  • I have the SuperBible 5th ed. but the index has no entries for ‘frame rate’ or even ‘double buffering’.

I’m experienced with developing time-sensitive code in C++. I’ve done a decent amount of graphics, but mostly non-interactive 2D. I’m relatively new to OpenGL, FLTK, and X programming.

Thanks in advance for any suggestions - hope I’m asking the right questions. :)

  - Marsh

I think the tools you are missing right now are glFlush() and glFinish() for the rendering pipeline, and glXSwapIntervalSGI() for the display sync.

Without these, the card can slurp up a lot of rendering commands before actually starting any rendering, which throws off any time measurement.

Simple brute force:


glXSwapIntervalSGI(1);                         // never display more than 1 frame per screen refresh
while (not_exit) {
   storeCurrentTimeA();                        // timestamp the start of the frame
   animateDependingOnElapsedTime();            // advance the animation by the measured frame time
   glDrawArrays(GL_TRIANGLES, 0, vertexCount); // issue the frame's rendering
   glXSwapBuffers(display, window);            // queue the vsynced buffer swap
   glFinish();                                 // block until the GPU has actually finished
   storeElapsedTimeSinceA();                   // measure how long the frame really took
}

Pros:

  • simple
  • as long as your rendering takes negligible time (never dropping under 60fps), you are guaranteed exactly 60fps on screen
  • you can even skip measuring time if you know the screen refresh rate (but you should measure anyway for robustness, in case your rendering does drop below 60fps sometimes)

Cons:

  • depending on the implementation of glFinish() (polling, cough) it can eat a whole CPU core while waiting for the vsync, even if there is no other CPU usage in your rendering code. Good news: even if your legitimate CPU usage climbs, it will stay at one core. Try glFlush() instead if that is a problem for you, but then the GPU can queue up several frames (I have heard up to 3) before actually blocking.

Start with this; then, depending on your results, there may be cleverer but more complex approaches.

Thanks.

I had been going down that road, but was suspicious about the glFinish() call.

Since I’m currently single-threaded, all this is happening in the UI’s main event loop. I’m wary about calling a blocking function like glFinish() which doesn’t provide any guarantee about how long it might take to return.

Historically, UI stuff hasn’t always worked so hot with multithreading, but OpenGL APIs seem quite modern and mature these days. Is it reasonable to do the rendering from a background thread and make the blocking call from there?
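
Something like this is what I have in mind (a sketch; it assumes the context was created on the main thread and released there with glXMakeCurrent(dpy, None, nullptr) first, since a context can only be current in one thread at a time):

#include <atomic>
#include <thread>
#include <GL/glx.h>

std::atomic<bool> running(true);

void render_thread_main(Display* dpy, Window win, GLXContext ctx) {
    glXMakeCurrent(dpy, win, ctx);      // bind the context in this thread
    while (running) {
        // ... animate and glDrawArrays ...
        glXSwapBuffers(dpy, win);
        glFinish();                     // blocking here can't stall the UI event loop
    }
    glXMakeCurrent(dpy, None, nullptr); // release before the thread exits
}

// from the main thread: std::thread renderer(render_thread_main, dpy, win, ctx);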

Or perhaps I could insert a sync fence at the end of each frame and wait on that, which hopefully wouldn’t spin the CPU to death; failing that, I could at least poll it at idle. Is there an established way of finding out when the vsync actually happened? Can I sync on a fence after the glXSwapBuffers, or does that not qualify for SYNC_GPU_COMMANDS_COMPLETE because it’s a GLX function?
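
i.e., something along these lines (a sketch; whether the fence also covers the swap is exactly what I’m unsure about):

// ... all GL drawing for the frame ...
glXSwapBuffers(display, window);
// Unclear whether the fence covers the swap, since glXSwapBuffers is GLX, not GL.
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// Later, e.g. from an idle callback: wait without (hopefully) spinning the CPU.
GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                 16u * 1000u * 1000u);  // timeout in ns (~16 ms)
if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED) {
    // everything up to the fence has completed on the GPU
}
glDeleteSync(fence);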

It would seem odd if there’s not a name for a software timebase/rate synchro. I’ve provisionally named mine the Cyclomatic Governator, so I hope something else comes up!

Happy Thanksgiving. :)

Since I’m currently single-threaded, all this is happening in the UI’s main event loop. I’m wary about calling a blocking function like glFinish() which doesn’t provide any guarantee about how long it might take to return.

It’s going to return when all previous OpenGL commands have completed. Unless you have rendered a lot of stuff, or are running on really antiquated hardware, it’ll take about one frame. So if you’re running at 30fps, it will take 1/30th of a second.

Nothing you should be concerned about.

OK, well I have what I think is a good solution in place now.

It uses a fence sync to detect when each frame has finished rendering. It also notes the time at which swapBuffers returns, and usually hits that target within a few microseconds. Short delays are made up over the next few frames, but long skips force a resync. It turned out not to need glFinish or swapInterval.
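
In outline, the pacing loop looks roughly like this (simplified; schedule_next_redraw() is a stand-in for arming the FLTK timeout, now_seconds() reads CLOCK_MONOTONIC, and the constants are my tuning values):

static double target = 0.0;                  // absolute deadline of the next frame
static const double period = 1.0 / 60.0;     // nominal frame period, seconds

void end_of_frame() {
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glXSwapBuffers(display, window);
    glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                     GLuint64(3 * period * 1e9));  // generous timeout, in ns
    glDeleteSync(fence);

    double now = now_seconds();
    target += period;                        // next frame's deadline
    if (now - target > 2 * period)           // long skip: resync instead of chasing
        target = now + period;
    // a small overshoot is simply absorbed over the next few frames
    schedule_next_redraw(target - now);      // hypothetical helper; arms the timer
}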

I found that my video card will, in fact, happily queue several rendered frames in its pipeline.

Thanks again.