Displaying sequence of frames in loop on projector

Dear all,

I have written OpenGL code in combination with the SOIL toolbox to load a series of bitmaps onto the GPU and to display them in a loop, one after the other. The speed at which this happens can be controlled more or less using Sleep() in the idle function, but for now it is being displayed at maximum refresh rate by setting the Sleep to 0.001ms.

The next step for my application would be to output this series of bitmaps (in full screen) on a projector with a refresh rate of 60Hz, making sure that with every refresh of the projector (i.e. every newly projected screen), a new bitmap is imaged. And this in, let's say, an infinite loop.

What would be the best way to do this? Can I use OpenGL to incorporate some sort of control so that there are no frame-drops/doubles in the projection? Or am I barking up the wrong tree here?

Thanks in advance for any comments,


You should never ever ever use Sleep calls to control framerate. Sleep only guarantees a minimum time to sleep for, and it may actually sleep for any arbitrary amount of time longer. The time specified to sleep for doesn’t include the time taken to draw a frame, so it’s totally unreliable. The common naive implementation looks something like this:

while (1)
{
    DrawFrame ();

    // run at 60fps
    Sleep (16);
}

That’s not going to run at 60fps.

First of all, your Sleep (16) call may actually sleep for 16ms. Or for 17ms. Or for 10,000ms. You don’t have control over this.

Secondly, how long does the frame take to draw? Does it take less than 0.666ms (the time left over in a 16.666ms frame budget after the Sleep)? Does it take 1ms? 2ms? Will a transient condition on the PC cause occasional frames to take 4ms? You can’t predict this in advance.

Thirdly, Sleep is commonly based on a system timer that has absolutely lousy resolution. The default resolution of Sleep is in the order of ~15ms, so a Sleep (16) call may actually sleep for no less than 30ms! There are of course API calls to control this (e.g. timeBeginPeriod on Windows) but unless you explicitly use them you’re stuck with that lousy resolution.
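As a rough illustration (Windows-specific, and only a sketch): wrapping the sleeping portion of your program in a timeBeginPeriod/timeEndPeriod pair requests a finer timer resolution, so short Sleep calls behave closer to what you asked for:

```cpp
// Sketch: raising the Windows system timer resolution so that Sleep
// granularity is roughly 1ms instead of the ~15ms default.
// Link with winmm.lib; on older SDKs these live in mmsystem.h.
#include <windows.h>
#include <timeapi.h>

int main()
{
    timeBeginPeriod(1);  // request 1ms timer resolution

    Sleep(16);           // now sleeps for roughly 16-17ms rather than ~31ms

    timeEndPeriod(1);    // always pair with the matching timeBeginPeriod
    return 0;
}
```

Note that this only fixes the granularity problem; it does nothing about the other two problems above, which is why Sleep is still the wrong tool for framerate control.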

So your frame time isn’t 16ms here. It’s 16ms plus whatever arbitrary extra amount of time the OS decides you’re going to sleep for, rounded up to the next interval of your sleep timer resolution, plus whatever unpredictable amount of time it takes to draw the frame. It should be obvious - Sleep calls are totally useless for controlling framerate.

Where Sleep is OK is if you want to use it for reducing power usage (there are other ways of doing this however), but it’s not OK for controlling framerate, and it’s important to understand upfront that these are not two different classes of the same thing.

In order to display at a fixed refresh rate of 60Hz you should use vsync. This can be controlled through OpenGL (e.g via the WGL_EXT_swap_control extension on Windows) and will guarantee that you’ll never run faster than the refresh rate of your display device.
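A minimal sketch of what that might look like on Windows, assuming a GL context has already been created (the function name EnableVSync and the fallback behaviour are mine; the extension function must be queried at runtime):

```cpp
// Sketch: enabling vsync on Windows via the WGL_EXT_swap_control extension.
// wglSwapIntervalEXT can only be queried once a GL context is current.
#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void EnableVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);   // 1 = wait for one vertical blank per SwapBuffers
    // else: extension not supported, you're stuck with the driver default
}
```

With the swap interval set to 1, each SwapBuffers call blocks until the next vertical blank, so your loop can never outrun the display.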

You can still run slower, though, and again that’s something you have little control over. E.g. it may take some time to load a bitmap from disk, it may take some time to transfer it to the GPU, it may take some time to draw it on your backbuffer, and any one of those may be affected by transient conditions on your PC (maybe the OS decides to swap out some inactive memory at the same time as you’re reading a bitmap from disk, thereby affecting overall disk IO performance, for example) and may bump you to below 60fps. So you’re not going to be able to guarantee that you’ll never miss a frame, but you can adapt to things a bit better.

A rough way of doing this might look something like:

- Figure out how much time has elapsed (in seconds) since the video started (using a high resolution timer).
- Multiply this by 60 to get the frame number to show.
- If this hasn’t changed since the last frame, then don’t bother with the rest (you may be able to get away with a Sleep (1) here if you want to reduce CPU usage and if you use the appropriate API to control the resolution of the timer Sleep calls are based on).
- Otherwise, convert this frame number to the name of a bitmap file (I’m assuming that you’ve got some convention that will enable you to do this quickly and easily).
- Load and display the bitmap.
- Swap buffers.

The only place where a Sleep call may be appropriate here is where I’ve indicated above; there should not be any other Sleep call anywhere in your main loop. Using this approach, instead of one Sleep (16) call you’ll just have a bunch of consecutive Sleep (1) calls, followed by a frame, followed by another bunch of consecutive Sleep (1) calls, and that will behave much better.
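The time-to-frame-number mapping in those steps is simple enough to sketch in plain C++ (the function names here are mine; in the real program the elapsed time would come from a high-resolution timer rather than being passed in):

```cpp
// Map elapsed time (in seconds) since the video started to the frame
// number that should currently be on screen, at 60 frames per second.
int FrameNumberFor(double elapsed_seconds)
{
    return (int)(elapsed_seconds * 60.0);
}

// Map a frame number to the index of the bitmap to display, for a
// looping sequence of num_bitmaps images.
int BitmapIndexFor(int frame_number, int num_bitmaps)
{
    return frame_number % num_bitmaps;
}
```

Each pass through the main loop you’d call FrameNumberFor with the current elapsed time, and only draw and swap when the result has advanced past the last frame you drew.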

Thanks for this great reply.

I am not fixed on using the Sleep() function in any way; it was just the only thing I could come up with so far to control the frame rate. I will have a look at vsync now, and see if I can use it to make my application work.

One additional question, though: couldn’t I load the bitmaps onto the GPU in advance? I need to loop over a sequence of, let’s say, 32 frames that will not change. When frame #32 is displayed, I need to display frame #1 again, and so on… I thought that by loading them to texture[1], texture[2], …, texture[32] before any displaying, this could save me time and thereby benefit the “one frame every 1/60th of a second” goal. Or is this not how it works?

Simple code for now:

#include "stdafx.h"
#include "cv.h"
#include <conio.h>
#include <highgui.h>
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <Windows.h>
#include <tchar.h>
#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>
#include <string>
#include <io.h>
#include <gl\gl.h>			// Header File For The OpenGL Library
#include <gl\glu.h>			// Header File For The GLU Library
#include <glut.h>
#include "SOIL.h"
#include <math.h> 

using namespace cv;
using namespace std;

GLuint texture[32];			// Storage for up to 32 textures ( NEW )

int k = 0;
int l = 0;
int n = 0;

const unsigned int TARGET_FPS = 60;
const double TIME_PER_FRAME = 1000.0 / TARGET_FPS;
int g_start_time;
int g_current_frame_number;
void myInit(void); 
void display(void); 
void myIdle();

void myIdle()                 // ! Will not use the Sleep() function anymore in next version
{
    glutPostRedisplay();        // display function is recalled on idle
    double end_frame_time, end_rendering_time, waste_time;
    // event handling is done elsewhere
    // draw current frame
    // wait until it is time to draw the current frame
    end_frame_time = g_start_time + (g_current_frame_number + 1) * TIME_PER_FRAME;
    end_rendering_time = glutGet(GLUT_ELAPSED_TIME);
    waste_time = end_frame_time - end_rendering_time;
    if (waste_time > 0.0)
        Sleep((DWORD)waste_time);    // Sleep() takes milliseconds, the same unit as TIME_PER_FRAME
    // update frame number
    g_current_frame_number = g_current_frame_number + 1;
}
int DetectNrImages(){
	char path[100];
	GetCurrentDirectory(100, path);
	printf("path : %s\n", path);
	int rows = 0;
	int cols = 0;

	DIR* directory = opendir(path);
	dirent *entry;

	printf("Detecting .bmp files...\n");
	if (directory != NULL){
		Mat test = imread("img0.bmp");
		cvtColor(test, test, CV_RGB2GRAY);
		rows = test.rows;
		cols = test.cols;
		int npix = rows * cols;
		printf("Image size = %i x %i\n", cols, rows);

		while ((entry = readdir(directory)) != NULL){

			std::string fname = entry->d_name;

			if (fname.find(".bmp") != string::npos) {
				printf("Filename: %s\n", fname.c_str());
				n++;								// count the .bmp files
			}
		}
		closedir(directory);
	}
	else {
		perror("Couldn't open the directory");
	}
	return n;
}

int LoadGLTextures(int n)	// Load images and convert to textures
{
	for (int j = 0; j < n; j++){

		char integer_string[32];
		char *fname1 = "img";

		sprintf(integer_string, "%d", j);
		char *fname3 = ".bmp";
		char result[100];   // array to hold the filename

		strcpy(result, fname1);          // copy string one into the result
		strcat(result, integer_string);  // append string two to the result
		strcat(result, fname3);          // append string three to the result

		/* load an image file directly as a new OpenGL texture */
		texture[j] = SOIL_load_OGL_texture(result, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, 0);
		if (texture[j] == 0){
			printf("Problem with texture number %i\n", j);
			return false;
		}
		printf("%s loaded\n", result);

		// Typical texture generation using data from the bitmap
		glBindTexture(GL_TEXTURE_2D, texture[j]);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);	// no mipmaps, so don't use the mipmapped default filter
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
		//glTexImage2D(GL_TEXTURE_2D, 0, 3, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, texture[j]);
	}
	return true;										// Return success
}

int InitGL(int n)										// All setup for OpenGL goes here
{
	g_start_time = glutGet(GLUT_ELAPSED_TIME);
	g_current_frame_number = 0;
	if (!LoadGLTextures(n))								// Jump to texture loading routine ( NEW )
	{
		printf("Texture loading failed\n");
		return FALSE;									// If a texture didn't load, return FALSE
	}
	glEnable(GL_TEXTURE_2D);							// Enable texture mapping ( NEW )
	//glShadeModel(GL_SMOOTH);							// Enable smooth shading
	//glClearColor(0.0f, 0.0f, 0.0f, 0.5f);				// Black background
	//glClearDepth(1.0f);									// Depth buffer setup
	//glEnable(GL_DEPTH_TEST);							// Enables depth testing
	//glDepthFunc(GL_LEQUAL);								// The type of depth testing to do
	//glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);	// Really nice perspective calculations
	return TRUE;										// Initialization went OK
}

void display() 
{
	k = k % n;				// wrap around to the first frame after the last one
	//glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);	// Clear the screen and the depth buffer
	//glLoadIdentity();									// Reset the view

	glBindTexture(GL_TEXTURE_2D, texture[k]);

	glBegin(GL_QUADS);
	// Front face
	glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
	glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
	glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
	glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
	glEnd();

	glutSwapBuffers();
	k++;					// advance to the next frame
}

int main(int argc, char **argv) 
{
	glutInit(&argc, argv); 
	glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); 
	glutInitWindowSize(640, 480); 
	glutCreateWindow("Texture Map Demo"); 

	n = DetectNrImages();		// assign the global n used by display()
	printf("%i images found\n", n);

	InitGL(n);
	glutDisplayFunc(display);
	glutIdleFunc(myIdle);
	glutMainLoop();
	return 0;
}