Modifying textures using glTexSubImage2D()


The goal of my project is to manually edit depth and color buffer values. The first version of this project used glDrawPixels(), and it worked fine, but it was also painfully slow. To speed things up, some people suggested that I use glTexSubImage2D() along with FBOs instead of the rather slow glDrawPixels() to edit RGB or z-buffer values. However, I can’t seem to copy data to textures successfully using glTexSubImage2D().

The code starts by initialising the FBO along with its textures. I’ve decided to use a texture for the depth buffer, as I need to read and write it every frame.

         glBindFramebuffer(GL_FRAMEBUFFER, FBOtest);

         // Allocate storage for the color texture (no initial data)
         glBindTexture(GL_TEXTURE_2D, TEXtest);
         glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1920, 1080, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);

         // Allocate storage for the depth texture (no initial data)
         glBindTexture(GL_TEXTURE_2D, DEPTHtest);
         glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1920, 1080, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

         // Attach both textures to the FBO, then unbind it
         glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, TEXtest, 0);
         glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, DEPTHtest, 0);
         glBindFramebuffer(GL_FRAMEBUFFER, 0);
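
For reference, the attachment setup can be sanity-checked before unbinding the FBO with glCheckFramebufferStatus(); a sketch of that check (the error print is just illustrative, not part of my original code):

         // Sketch: verify the FBO is complete before unbinding
         GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
         if (status != GL_FRAMEBUFFER_COMPLETE)
             printf("FBO incomplete: 0x%x\n", status);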

Later on, the program enters a rendering loop where the color and depth textures are edited following some pattern. tabDepth is an array holding the depth value for each pixel; it needs to be copied to the depth texture every frame.

      glBindFramebuffer(GL_FRAMEBUFFER, FBOtest);

      ... // tabDepth editing algorithm

      // Upload the edited depth values into the depth texture
      glBindTexture(GL_TEXTURE_2D, DEPTHtest);
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1920, 1080, GL_DEPTH_COMPONENT, GL_FLOAT, tabDepth);

      glReadPixels(900, 500, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depthpixel); // Read back one pixel to verify
      glBindFramebuffer(GL_FRAMEBUFFER, 0);
      printf("Depth = %.3lf\n", depthpixel);

In this case, depthpixel reads back as 0.000 every frame, while tabDepth[][] contains 0.35f. So glTexSubImage2D() either fails to write correctly or writes somewhere else, and glReadPixels() isn’t reading the buffer I expect. I suspect the problem lies in those lines, but I’m having trouble pinning it down.

Does anyone know where my error might be?


There might be synchronisation issues with updating the texture while it’s attached to the framebuffer.

Try detaching it before calling glTexSubImage2D() then reattaching it before calling glReadPixels().

That’s just a guess, though.

[QUOTE=GClements;1266814]There might be synchronisation issues with updating the texture while it’s attached to the framebuffer.

Try detaching it before calling glTexSubImage2D() then reattaching it before calling glReadPixels().

That’s just a guess, though.[/QUOTE]

I’m not entirely sure if this is what you meant, but I tried detaching, writing, and then re-attaching the texture using the following code, and it does seem to work. However, doing this causes a serious performance hit.

      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, 0, 0);        // detach
      glBindTexture(GL_TEXTURE_2D, DEPTHtest);
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1920, 1080, GL_DEPTH_COMPONENT, GL_FLOAT, tabDepth); // write
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, DEPTHtest, 0);  // re-attach
      glReadPixels(100, 100, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depthpixel);

Could you specify if this is what you meant or if I’ve done something completely different?


EDIT: After further testing, I found that this code crashes 90% of the time (with an access violation) before it manages to read and print. So either the code I added is broken, or the problem was always there and this change merely exposed it.

That’s what I meant.

There might be some combination of glMemoryBarrier() flags which achieves the same result (or there might not), but even if there is, there’s no guarantee that performance will be any better.

OpenGL is designed around pipelining: most OpenGL functions simply append a command to the queue and return without waiting for it to complete.

But functions which copy data from the implementation into application memory have to wait until the correct data is actually present within the implementation before they can copy it back, which means waiting for any pending commands to complete. In other words, such commands result in a pipeline stall.
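
If you want to make that wait explicit (or overlap it with CPU work), a sync object is one way to do it; a rough sketch, assuming GL 3.2+ sync objects are available:

      // Sketch: fence the pending commands, then block until they finish.
      GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
      // ... do other CPU work here ...
      glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, GL_TIMEOUT_IGNORED);
      glDeleteSync(fence);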

If you’re only calling glReadPixels() to verify that everything is working as expected (and not because you actually need the data), measure the performance without it before concluding that there’s a problem.

If you actually need the data in application memory for whatever reason, you should consider whether you’d be better off implementing everything in software. Video hardware is designed primarily around data flowing from the application to the video hardware to the monitor. Having data flowing in the other direction is a possibility rather than a priority.

If you do need to be reading back data from the implementation, performance will be improved if you don’t need to read it back immediately after it has been generated, but can interleave operations like:

initiate operation A -> initiate operation B -> read result of A -> read result of B

This way, at the point that you try to obtain the result of operation A, it’s more likely to already be available. And if it isn’t available at that point, once it does become available the implementation can get on with B immediately after returning it.

If you can’t do that, then you end up using either the CPU or the GPU but not both at the same time. Either the CPU is idle waiting for the GPU to complete a task so its result can be obtained, or the GPU is idle because the CPU will be dealing with the result of the previous operation before it can provide the GPU with more work.
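
For glReadPixels() specifically, the usual way to get this interleaving is a pair of pixel-pack buffer objects (PBOs): the readback is started in one frame and consumed the next. A rough sketch, with buffer names and sizes purely illustrative:

      // Setup (once): two PBOs, each large enough for one depth frame.
      GLuint pbo[2];
      glGenBuffers(2, pbo);
      for (int i = 0; i < 2; ++i) {
          glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
          glBufferData(GL_PIXEL_PACK_BUFFER, 1920 * 1080 * sizeof(GLfloat), NULL, GL_STREAM_READ);
      }

      // Per frame: start this frame's readback (returns immediately), then map
      // the PBO filled last frame, whose data is probably ready by now.
      glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[frame % 2]);
      glReadPixels(0, 0, 1920, 1080, GL_DEPTH_COMPONENT, GL_FLOAT, 0); // writes into the bound PBO
      glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[(frame + 1) % 2]);
      GLfloat *depth = (GLfloat *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
      if (depth) {
          // ... use last frame's depth values ...
          glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
      }
      glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);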

I’ve tested my code with and without glReadPixels() and noticed little to no difference in frame rate. While calling glReadPixels() may not be useful right now, I will eventually have to call it once per frame to retrieve the depth buffer from the GPU before editing it. glTexSubImage2D() appears to be what’s slowing things down; as things stand, it achieves roughly the same frame rate as glDrawPixels() did.

[STRIKE]As for the nvoglv32.dll access violation crashes, they always happen during the first loop iteration at the glTexSubImage2D() call, leading me to believe that it tries to write in texels that are out of bounds. I’ve added a sleep command after initialising the textures and before entering the loop and the crashes happen ~5% of the time now.[/STRIKE]
EDIT: I’ve completely fixed the crash by using a dynamic 1D array instead of a dynamic 2D array. The random crashes were most likely caused by the 2D array’s memory layout: a pointer-to-pointer array doesn’t store its rows contiguously, while glTexSubImage2D() reads the whole upload as one contiguous block.