Problems using glTexSubImage2D and window resize...

There’s something fundamental I’m not doing right, because I’m having this problem in GLUT, OpenGL, and OpenCL. The basic idea is I’m using OpenGL for 2D raster graphics by rendering a quad with a texture. I use glTexSubImage2D to update the texture during Idle, and then during redraw I draw the quad with texture coordinates.

Everything works fine unless I combine updating the texture with Window resize. (I can resize the window with no updates, or see my updates without resizing the window.) When I try both, as soon as a resize occurs, the texture rapidly fragments into rectangles and garbage throughout memory, as if the texture is lost. (The artifacts are not simple ones as you’d expect with incorrect pitch, etc.)

I’ll describe what I’m doing in GLUT, since it’s the easiest, and I’m hoping someone can catch an error. This is on Mac OS X Snow Leopard 10.6.3 / Xcode. I am VERY GRATEFUL for any help, as this has held me up for almost 2 years with no solution.

(1) Initial setup before glutMainLoop(); I make a buffer for the texture. It’s not a power of 2 size, but I’ve tried power of 2 sizes and got the same result:

unsigned char* pucMyBuffer = (unsigned char*)malloc(w * h * 4);

  1. Then I set up the OpenGL texture state for it like so:
    // Create a new texture!
    glGenTextures(1, &gTex1ID);

    glBindTexture(GL_TEXTURE_2D,gTex1ID);
    glEnable( GL_TEXTURE_2D ); // Allow texturing!

    // This is needed!
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
    w,h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pucMyBuffer);

    // Try to get Alpha to work:
    glEnable (GL_BLEND);
    glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

  2. After entering glutMainLoop, I draw the window like this. Currently w and h are both 1.0 -> it depends on whether I want to always fill the window with the texture or make the texture per-pixel accurate:

    glBindTexture(GL_TEXTURE_2D,gTex1ID);

    glColor4f(1.0, 1.0, 1.0, fGlobalAlpha);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1,0); glVertex2f(w, 0);
    glTexCoord2f(1,1); glVertex2f(w, h);
    glTexCoord2f(0, 1); glVertex2f(0, h);
    glEnd();

    glutSwapBuffers();

  3. I handle a resize like this. Again, which option I use depends on whether I want constant-size pixels or a constant view:

//glViewport( 0, 0, w, h );
glViewport( 0, 0, 1920, 1200 );
glLoadIdentity();
gluOrtho2D(0,1,0,1);

glutPostRedisplay();
glutPostWindowRedisplay(gMainWindowID);

NOTE THAT EVERYTHING SO FAR WORKS PERFECTLY!

  4. And on the Idle call, I do this to keep the texture changing
    (note that the hard-coded sizes were just part of my attempts to debug what was going on):

    // Do a loaded update:
    glTexSubImage2D(GL_TEXTURE_2D, 0, // MIP level
    0,0, // SrcX, SrcY
    1920,1200, // W,H
    GL_RGBA, GL_UNSIGNED_BYTE, pucMyBuffer);

    glutPostRedisplay();
    glutPostWindowRedisplay(gMainWindowID);

This last step works as long as I don’t resize, but if I resize after changing even 1 byte of the buffer, the display goes haywire.

Any ideas are appreciated more than I can say. I am close to having to abandon the use of OpenGL after over a year of trying, because I can find no programmer who knows anything about it.

Thanks,

  • Jeff

I’ll just add that my problem is that almost all OpenGL examples I’ve found make use of static textures and/or full screen mode. If anyone has a pointer to actual sample code using OpenGL for 2D displays, I’d be grateful. I’m also wondering: if glTexSubImage2D is somehow screwed up by window resizing, could I just load up and attach a new texture with every window resize?

I’d definitely create a new texture for each resize, but I’m not certain whether your texture is supposed to have the same dimensions as your window or not. Your code seems to assume that it isn’t in one place but is in another, which I guess is the problem you have.

You’re also using a non-power-of-2 texture which might cause problems on some hardware, and - depending on your window size - might even exceed the max texture size allowed by your hardware.
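For what it’s worth, here’s a quick sketch (not from your code, just an illustration) of how you could check the driver’s limit before allocating a big texture:

GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   // largest legal texture width/height
if (w > maxSize || h > maxSize)
    fprintf(stderr, "Texture %dx%d exceeds GL_MAX_TEXTURE_SIZE (%d)\n", w, h, (int)maxSize);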

If your pucMyBuffer stays a constant size, e.g. 1920 x 1200, and you are then using


glTexSubImage2D(GL_TEXTURE_2D, 0, // MIP level
0,0, // SrcX, SrcY
W, H, 
GL_RGBA, GL_UNSIGNED_BYTE, pucMyBuffer);

where W != 1920, you will be reading the data incorrectly from pucMyBuffer.
glTexSubImage2D doesn’t know that pucMyBuffer is a 2D array, it just treats it as a single block of memory, so it will load each row of the texture directly after the previous row.

If you want a fixed size pucMyBuffer, you will need to inform OpenGL that it should skip the remaining unused bytes in pucMyBuffer:

glPixelStorei(GL_UNPACK_ROW_LENGTH, 1920);

then if you try loading a smaller image from pucMyBuffer with glTexSubImage2D, OpenGL will ignore the unused bytes at the end of each row.
http://www.opengl.org/sdk/docs/man/xhtml/glPixelStore.xml
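Put together, a rough sketch using the variable names from your post would look something like this (just an illustration, not tested against your code):

// Upload a w x h sub-rectangle from a buffer whose rows are 1920 pixels wide:
glBindTexture(GL_TEXTURE_2D, gTex1ID);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 1920);  // source rows are 1920 pixels apart in memory
glTexSubImage2D(GL_TEXTURE_2D, 0,           // MIP level
                0, 0,                       // destination x, y
                w, h,                       // size of the sub-rectangle actually uploaded
                GL_RGBA, GL_UNSIGNED_BYTE, pucMyBuffer);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);     // restore the default when done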

Guys, thanks so much for your ideas. I’ll mess around some with glPixelStore and see if it helps. Honestly, though, despite this being sloppy example code, the width was always 1920 in my example. And the result I got was hard to explain -> massive flickering of white noise, etc. Maybe it needs to be a power-of-2 texture (although I have tried that before).

Another thing I noticed is that while I never got correct behavior, the resulting incorrect behavior depended very strongly on when/where I did the update in the overall scheme of things.

I’m just curious -> can anyone on this list recommend a place where I could commission a programmer to write such a sample program? I’ve really lost all hope in being able to get this to work.

Thanks,

  • Jeff

Guys, here’s what I don’t understand about all this. Why should the texture have any connection whatsoever to the window? Why should it need to be resized in any way? Although my goal is to make it stretch over the window, it’s just a 2D texture on a quad. The quad’s size is defined somewhere in 3D world space.

It seems like my texture should be able to be any resolution and just map to that quad, no matter what happens with the window. Unless there’s something about a window resize that destroys the whole OpenGL context, so you have to load all the textures over again.

No, you’re absolutely right. Which means the weird behavior on resize probably has something to do with a bug in how you handle resize.

Whittle your program down to short test program, and somebody will probably be able to point out the error.

For instance, I find it odd that you have no MatrixMode calls in your code. gluOrtho2D should be called when the PROJECTION matrix is active. And where are you initializing MODELVIEW to the identity? And if you are, seeing that you’re passing 0,1,0,1 to gluOrtho2D, why are you sending verts down the pipe with coordinates 0…w, 0…h, where w,h = 1920x1200?
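Something along these lines is what I mean (just a sketch, assuming you want a 0..1 unit quad to fill the window):

void reshape(int w, int h)
{
    glViewport(0, 0, w, h);

    glMatrixMode(GL_PROJECTION);    // gluOrtho2D belongs on the PROJECTION stack
    glLoadIdentity();
    gluOrtho2D(0.0, 1.0, 0.0, 1.0);

    glMatrixMode(GL_MODELVIEW);     // leave MODELVIEW as the identity for drawing
    glLoadIdentity();

    glutPostRedisplay();
}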

You’ve got a deal. I’ll try to get the simplest example possible. The trick is it always seems to work until I dynamically update the texture.

Will get back to you folks.
And thanks. :)

  • Jeff

Guys, I just got my first clue! I’ve been starting over with the most minimal GLUT project possible. And what I found out is that if I do NOTHING AT ALL to draw the window, I get exactly the same visual artifacts!* Which means that my real issue is that my drawing routine is failing - maybe something as simple as having set up the camera in a faulty way so that I’m not seeing the quad.

Here’s what you can do to see what I’m seeing (abridged):

Init: glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);

void resize(int w, int h)
{
glViewport(0,0,(GLsizei)w,(GLsizei)h);
glutPostRedisplay();
}

void redraw() // do nothing at all!
{
glutSwapBuffers();
}

  • Judging by the artifacts, it looks like either Snow Leopard or, more likely, the graphics card is actually tiling images for efficiency -> it looks like 128x128 blocks.

OK, I think I’ve solved the problem. It might be overkill, though. Could you more knowledgeable folk tell me whether I really need to do all of it? Unfortunately, performance is not great -> I’m averaging about 30 fps (13-40 fps) on a GeForce 8600GT. I wonder if this is a GLUT artifact, because even when I don’t update the texture I still only get 30-60 fps, while in OpenGL using timer callbacks I get 300 fps. Worse yet, it doesn’t seem like the window itself is actually updating every time - the visual effect seems more like 5 fps, and tearing is extreme.

Anyway, here’s what I now do. For simplicity, the texture is now of fixed, constant size and is a RECTANGLE texture. Thanks so much for your encouragement! :)

Init:

// Create a new texture!
glGenTextures(1, &ImageID);

// Use a GL_TEXTURE_RECTANGLE_ARB:
glBindTexture(GL_TEXTURE_RECTANGLE_ARB,ImageID); // order?
glEnable( GL_TEXTURE_RECTANGLE_ARB );	// Allow texturing!

glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameterf(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Copy to the graphics card:
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, 
	IMAGE_W, IMAGE_H, 0, GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)ulImage);

====================
Draw:

glMatrixMode (GL_PROJECTION); // needed?
glLoadIdentity (); // needed?
gluOrtho2D(0.0, 1.0, 1.0, 0.0);

// Needed?
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);	// clear the surface
glClear(GL_COLOR_BUFFER_BIT);

// Draw the texture quad:
glBindTexture(GL_TEXTURE_RECTANGLE_ARB,ImageID); // order?
glEnable( GL_TEXTURE_RECTANGLE_ARB );	// Allow texturing!

// Live Upload of latest image:
CreateImage();

glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, // MipMap 0
	0,0,IMAGE_W,IMAGE_H,
	GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)ulImage);	

// Draw it! (texture rect is accessed by real coords)
glColor4f(1.0, 1.0, 1.0, 1.0); // no alpha effect unless blending enabled
glBegin(GL_QUADS);
	glTexCoord2f(0, 0); glVertex2f(0, 0);
	glTexCoord2f(IMAGE_W,0); glVertex2f(1, 0);
	glTexCoord2f(IMAGE_W,IMAGE_H); glVertex2f(1, 1);
	glTexCoord2f(0, IMAGE_H); glVertex2f(0, 1);
glEnd();

Also, for the benefit of posterity, I needed to employ the advice given earlier: glTexSubImage2D assumes you are giving it a linear buffer. To make this work at less than the full width of the texture, you do need to use the glPixelStorei command as:

glPixelStorei(GL_UNPACK_ROW_LENGTH, MY_IMAGE_W);

I would never have guessed that. Thanks folks.

I’ve now generalized the draw routine so you can either see your entire client area regardless of window size (SHRINK) or expose more of your client area as you enlarge the window (EXPAND). (WinW and WinH are set in the reshape callback.) The actual window draw would end with a glutSwapBuffers(). I’m hoping that reimplementing all this directly in OpenGL instead of GLUT will boost the performance, since applications do get high frame rates somehow. ;)

void DrawQuad()
{
int w = IMAGE_W, h = IMAGE_H;
if (WINDOW_MODE == MODE_EXPAND) { w = MIN(WinW,IMAGE_W); h = MIN(WinH,IMAGE_H) ; }

glMatrixMode (GL_PROJECTION); // needed?
glLoadIdentity (); // needed?
gluOrtho2D(0.0, 1.0, 1.0, 0.0);

// Needed?
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);	// clear the surface
glClear(GL_COLOR_BUFFER_BIT);

// Draw the texture quad:
glBindTexture(GL_TEXTURE_RECTANGLE_ARB,ImageID); // order?
glEnable( GL_TEXTURE_RECTANGLE_ARB );	// Allow texturing!

// Live Upload of latest image:
CreateImage();

if (WINDOW_MODE == MODE_EXPAND) glPixelStorei(GL_UNPACK_ROW_LENGTH, IMAGE_W); // Tell OpenGL it's a 2D buffer!

glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, // MipMap 0
	0,0,w,h,
	GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)ulImage);	

// Draw it! (texture rect is accessed by real coords)
glColor4f(1.0, 1.0, 1.0, 1.0); // no alpha effect unless blending enabled
glBegin(GL_QUADS);
	glTexCoord2f(0, 0); glVertex2f(0, 0);
	glTexCoord2f(w,0); glVertex2f(1, 0);
	glTexCoord2f(w,h); glVertex2f(1, 1);
	glTexCoord2f(0, h); glVertex2f(0, 1);
glEnd();

}

No, you’re not even clearing the window so you should expect to see whatever trash the OS or driver wants to give you by default.

Try this:


#include <stdio.h>
#include <stdlib.h>
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glx.h>
#include <GL/glut.h>

void checkGLError( const char hdr[] )
{
  int err = glGetError();
  if( err )
  {
    fprintf(stderr, "ERROR %s: %s\n", hdr, gluErrorString(err));
    exit(1);
  }
}

void reshape(GLsizei w, GLsizei h)
{
  glViewport(0, 0, w, h);
  glutPostRedisplay();
}

void keyboard( unsigned char key, int x, int y )
{
  // Key Bindings
  switch( key )
  {
    case 27 : exit(0);                    break;
  }
}

void display()
{
  glClearColor( 0,0,1,1 );
  glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
  glutSwapBuffers();
  checkGLError( "display() end" );
}

int main( int argc, char *argv[] )
{
  glutInit( &argc, argv );
  glutInitDisplayMode( GLUT_DOUBLE | GLUT_DEPTH | GLUT_RGB ); 
  glutCreateWindow( "window title" );
  checkGLError( "Create window" );

  glutReshapeFunc ( reshape  );
  glutDisplayFunc ( display  );
  glutKeyboardFunc( keyboard );

  glutPostRedisplay();
  glutMainLoop();
}

This should give you a blank blue window with no OpenGL errors.

Hey, next time post your entire GLUT test program. If folks just have to cut/paste/compile, they’re much more likely to do that (and you’re more likely to get help) than if they have to go cook a test program to run the fragments you’ve provided.

Ok, using my version of the test program cooked from your snippets, I get ~1ms/frame (i.e. ~1000 fps). Here is the code. Note that I did comment out CreateImage(), where you change the memory buffer, so you should snap some calipers around that to make sure that’s not what’s eating all your time. Also make sure you disable sync to vblank when you run this to get max frame rate.

Also note for future reference: for even better texture upload performance you should use PBOs (see the rough sketch after the test program below).


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glx.h>
#include <GL/glut.h>

const int IMAGE_W = 640;
const int IMAGE_H = 480;

GLuint ImageID;
char *ulImage;

void checkGLError( const char hdr[] )
{
  int err = glGetError();
  if( err )
  {
    fprintf(stderr, "ERROR %s: %s\n", hdr, gluErrorString(err));
    exit(1);
  }
}

void reshape(GLsizei w, GLsizei h)
{
  glViewport(0, 0, w, h);
  glutPostRedisplay();
}

void keyboard( unsigned char key, int x, int y )
{
  // Key Bindings
  switch( key )
  {
    case 27 : exit(0);                    break;
  }
}

void display()
{
glMatrixMode (GL_PROJECTION); // needed?
glLoadIdentity (); // needed?
gluOrtho2D(0.0, 1.0, 1.0, 0.0);

// Needed?
glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // clear the surface
glClear(GL_COLOR_BUFFER_BIT);

// Draw the texture quad:
glBindTexture(GL_TEXTURE_RECTANGLE_ARB,ImageID); // order?
glEnable( GL_TEXTURE_RECTANGLE_ARB ); // Allow texturing!

// Live Upload of latest image:
//CreateImage();

glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, // MipMap 0
                0,0,IMAGE_W,IMAGE_H,
                GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)ulImage);

// Draw it! (texture rect is accessed by real coords)
glColor4f(1.0, 1.0, 1.0, 1.0); // no alpha effect unless blending enabled
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(IMAGE_W,0); glVertex2f(1, 0);
glTexCoord2f(IMAGE_W,IMAGE_H); glVertex2f(1, 1);
glTexCoord2f(0, IMAGE_H); glVertex2f(0, 1);
glEnd();


  glutSwapBuffers();
  checkGLError( "display() end" );

  // Print frame time and FPS
  timeval cur ;
  gettimeofday ( & cur, 0 ) ;

  static double last_time  = 0;
  static int    num_frames = 0;
  double time = (double) cur.tv_sec + (double) cur.tv_usec / 1000000.0 ;

  double elapsed = time - last_time;
  if ( elapsed > 1 )
  {
    if ( last_time )
      printf( " %.3f ms/frame (%.3f fps)
",
              elapsed * 1000 / num_frames, num_frames / elapsed );
    last_time = time;
    num_frames = 0;
  }
  else
    num_frames++;

  glutPostRedisplay();
}

int main( int argc, char *argv[] )
{
  putenv( (char *) "__GL_SYNC_TO_VBLANK=0" );

  glutInit( &argc, argv );
  glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB ); 
  glutCreateWindow( "window title" );
  checkGLError( "Create window" );

  glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

// Create a new texture!
glGenTextures(1, &ImageID);

// Use a GL_TEXTURE_RECTANGLE_ARB:
glBindTexture(GL_TEXTURE_RECTANGLE_ARB,ImageID); // order?
glEnable( GL_TEXTURE_RECTANGLE_ARB ); // Allow texturing!

glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameterf(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

ulImage = new char[ IMAGE_W * IMAGE_H * 4 ];

memset( ulImage, 255, IMAGE_W * IMAGE_H * 4 );

// Copy to the graphics card:
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA,
             IMAGE_W, IMAGE_H, 0, GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)ulImage);

  checkGLError( "After texture create" );

  glutReshapeFunc ( reshape  );
  glutDisplayFunc ( display  );
  glutKeyboardFunc( keyboard );

  glutPostRedisplay();
  glutMainLoop();
}
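And since I mentioned PBOs, here’s roughly what that upload path could look like (only a sketch, untested here; adapt the names to your own program):

// One-time setup: a pixel unpack buffer big enough for one frame.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, IMAGE_W * IMAGE_H * 4, NULL, GL_STREAM_DRAW);

// Each frame: map the buffer, fill it, unmap, then upload from the bound PBO.
void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
if (ptr)
{
    memcpy(ptr, ulImage, IMAGE_W * IMAGE_H * 4);   // or generate the pixels in place
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
}
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, ImageID);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, IMAGE_W, IMAGE_H,
                GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)0);   // 0 = offset into the PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);           // back to normal client-memory uploads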

Thanks so much for all the help guys. I think some of the issue was how fast I was getting the idle callback. BTW, I was able to confirm in a dramatic way that my eyes weren’t deceiving me - Snow Leopard was updating the window at a much, much slower rate than the program was running. In some cases, I could draw 100 dots before they’d appear on the screen, even though each dot sent a full copy of the screen to the graphics card and drew it.

Guess it’s time to look into PBOs. :)

BTW, I also found that GLUT doesn’t work from threads at all - any call from a thread causes an instant crash.

Any advice as to how I can learn this stuff the way you have? The documentation seems pretty poor, especially with regard to what affects what, or what you need to do before using a certain function.

You guys are the first people who’ve ever helped me with OpenGL, and I’m really, really grateful.

  • Jeff

Just question everything, and post here whenever you have a specific question you can’t figure out or would like some general insight on. Reading and testing things in detail will get you there faster too. But just post your complete GLUT test program when you have problems.

As to GLUT and threads, I may be missing something, but I’ve always read that you cannot talk to OpenGL from multiple threads and expect good spec-defined behavior. You need only one thread talking to OpenGL in any multithreaded process.

About the only thing I’ve heard that might be considered an exception to that (though really it isn’t) is that you can map buffers in the “GL thread”, fill them in another thread, and then unmap them again in the “GL thread”. Here only one thread is talking to GL proper – the other thread is just filling a memory buffer, which doesn’t cause any threading/reentrancy issues.
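In rough outline (names made up; the hand-off and signalling between threads is up to you), the pattern looks like:

// GL thread: map the pixel buffer and hand the pointer to the worker.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
hand_pointer_to_worker(ptr);                       // hypothetical: your own signalling

// Worker thread: no GL calls at all, just fill the mapped memory.
// fill_pixels(ptr, IMAGE_W, IMAGE_H);

// GL thread again, once the worker says it's done:
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, IMAGE_W, IMAGE_H,
                GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);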

A quick update: I tried the code you provided, and it ran about twice as fast as mine -> about 100 fps at 1920x1200, which correlates to near 1,000 fps at 640x400.

The sync for vblank env variable seemed to have no effect.
I’ll have to figure out why yours is faster than mine - thanks for the help. I’ll check out the fast texture thread with PBOs.

Thanks guys!

  • Jeff

1000 fps = ~1ms/frame, 100 fps = ~10ms/frame. That’s a whale of a difference. My guess is your monitor is refreshing at 100Hz, or something in your system is limiting buffer swaps to 100Hz.

The sync for vblank env variable seemed to have no effect.

Yeah, that’s my guess. That’s an NVidia driver trick on Linux; I don’t know what the equivalent is elsewhere. SwapInterval(0) should do it, but I don’t know how to do that with a GLUT app. See if you can turn off sync to vblank in your control panel GUI.
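Since you’re on Mac OS X, one thing you could try (untested by me, so treat it as a guess) is setting the swap interval through CGL from inside a GLUT callback, once the context is current:

#include <OpenGL/OpenGL.h>

// 0 = don't wait for vblank before swapping.
GLint swapInterval = 0;
CGLSetParameter(CGLGetCurrentContext(), kCGLCPSwapInterval, &swapInterval);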

I’ll have to figure out why yours is faster than mine - thanks for the help. I’ll check out the fast texture thread with PBOs.

Find out what the vertical refresh rate of your video mode is set to. Probably 100Hz, so you just need to disable sync to vblank.

Also, GPU/CPU here is NVidia GTX285 and Core i7 920.