Serializing texture generation from threads

Hi all,

I’m trying to create an application which reads frame data from a webcam or other such capture device and applies a series of filters to the image before displaying it to the user. The filters are meant to be used as black boxes which can be connected arbitrarily into an overall processing pipeline on the image data. I want to be able to use OpenGL in my filters in order to take advantage of hardware acceleration. The problem is that each filter runs in its own thread so that the overall system stays responsive.

I realize now that OpenGL calls must be confined to a single thread, so to get around this limitation I’ve created a sort of traffic cop which queues up OpenGL requests from other threads, then serializes and executes them in the main thread. The effects of this, however, don’t seem to be very predictable. As a toy example, I want to change the brightness of an image by modifying the red, green and blue scale values with glPixelTransfer, copying the image data into a texture, then copying it back into memory to obtain the result. My traffic cop ensures that this code gets invoked in the main thread along with all other calls to OpenGL routines; however, the code fails to create a texture. Here’s the main worker function:

template <class E, class P, class M>
void image::BrightAdjStreamTmpl<E,P,M>::GlAdjustBrightness( void* params )
{
  // params is a void* to a BrightAdjParams structure which
  // contains all the parameters for the current function.
  BrightAdjParams* param_struct = reinterpret_cast<BrightAdjParams*>( params ) ;
  if ( param_struct ) {
    // param_struct->texture is a pointer to a memory location
    // which holds a GLuint that is used as a texture name. This
    // memory is owned by a different thread.
    if ( !glIsTexture( *param_struct->texture ) ) {
      glGenTextures( 1, param_struct->texture ) ;
      if ( glGetError() ) {
        *param_struct->texture = 0 ;
        return ;
      }
    }
    GLuint texture_id = *( param_struct->texture ) ;
    if ( glIsTexture( texture_id ) ) {
      // Set the new scaling factors.
      // param_struct->scale is a float value in the range
      // [-1.0, 1.0] which represents the actual brightness
      // scale value.
      glPixelTransferf( GL_RED_SCALE, rscale + param_struct->scale ) ;
      glPixelTransferf( GL_GREEN_SCALE, gscale + param_struct->scale ) ;
      glPixelTransferf( GL_BLUE_SCALE, bscale + param_struct->scale ) ;
      // ... copy the image into the texture and back out ...
    }
  }
}

The calling thread invokes GlAdjustBrightness through a function which queues it to be executed in the main thread and blocks until the function has run to completion:

void GLApp::ExecuteGL( void ( *gl_func )( void* ), void* params )
{
  // Create a condition to wait on for gl_func to terminate.
  pthread_cond_t done ;
  pthread_cond_init( &done, NULL ) ;

  // Queue the request.
  OpenGLRequest req( OpenGLCall( gl_func, params ), &done ) ;
  pthread_mutex_lock( &_GL_Req_Queue_Mutex ) ;
  _GL_Req_Queue.push( req ) ;
  pthread_mutex_unlock( &_GL_Req_Queue_Mutex ) ;

  // Wait for the function to be executed to completion.
  pthread_mutex_lock( &_GL_Mutex ) ;
  pthread_cond_wait( &done, &_GL_Mutex ) ;
  pthread_mutex_unlock( &_GL_Mutex ) ;

  pthread_cond_destroy( &done ) ;
}

Finally, the main display loop calls the updateDisplay function, which drains the queue and then displays the result:

  // Execute the requests in the OpenGL request queue.
  pthread_mutex_lock( &_GL_Req_Queue_Mutex ) ;
  while ( !_GL_Req_Queue.empty() ) {
    OpenGLRequest req = _GL_Req_Queue.front() ;
    _GL_Req_Queue.pop() ;
    // req.first.first is a pointer to the function that was
    // passed as the first argument of ExecuteGL.
    // req.first.second is the void* to the parameter structure
    // that contains the function arguments.
    // req.second is a pointer to the condition variable that
    // the calling thread is waiting on.
    req.first.first( req.first.second ) ;
    pthread_cond_signal( req.second ) ;
  }
  pthread_mutex_unlock( &_GL_Req_Queue_Mutex ) ;

  // Now update the display.
  BltImageInTexture( getPipeline()->getNextImage(), _display_texture_id ) ;
  drawScene( _display_texture_id ) ;
  displayParameterStatus() ;
  // After all drawing primitives have completed, swap the front and back
  // buffers.
  SDL_GL_SwapBuffers() ;

The second glIsTexture in GlAdjustBrightness( void* ) never succeeds, even though the allocation of a texture name seems to have succeeded. Does anyone have any idea why this might be? The OpenGL function calls, including the texture-related operations, succeed for the display part (i.e. after all queued OpenGL requests have been handled).

NOTE: I realise this is a lot of work to simply modify the brightness of a frame of video data, however, this is just an example. The filters that actually use this method will be complex enough to justify this overhead.

Thanks in advance and my apologies if the example is convoluted,

glGenTextures just returns unused identifiers. It does not create the texture, so from the point of view of the glIsTexture function those identifiers do not correspond to any texture. A name only becomes an actual texture object once it is first bound with glBindTexture.
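A sketch of the corrected creation path (a fragment only; it assumes a current GL context, so it is not runnable on its own):

```cpp
// glGenTextures only reserves an unused name.
glGenTextures( 1, param_struct->texture ) ;
// Binding the name for the first time is what actually creates
// the texture object.
glBindTexture( GL_TEXTURE_2D, *param_struct->texture ) ;
// Only from this point on does glIsTexture( *param_struct->texture )
// return GL_TRUE.
```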

Yikes how silly of me! If I bind the texture before calling glIsTexture, it works as it should. Thanks.