context sharing and synchronization

hello,

I have two GL contexts that share resources, each current in a different thread.

In one thread I create a pixel buffer and upload data to it. Then I create some textures and fill them with parts of that pixel buffer. At the end I call glFinish(), set a boolean variable (ready = true) and release the GL context.

On the main thread I draw an animated quad; if the ready variable is true I texture-map the quad. Everything works, except that the animation is affected by the other thread. I mean that GL commands on the worker thread are affecting the main thread's rendering.
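
In pseudo-code the setup is roughly this (heavily simplified; the names here are made up and the real code is longer, available on request):

// worker thread (its own shared context is current here)
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, dataSize, pixelData, GL_STATIC_DRAW);
// ... create the textures and fill them from parts of the pixel buffer ...
glFinish();   // wait until the uploads have actually completed
ready = true; // read by the main thread

// main thread, every frame
if (ready)
    glBindTexture(GL_TEXTURE_2D, someTexture);
// ... draw the animated quad ...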

Moreover, on my Intel HD 3000 everything runs smoothly, but on my GT 520 the worker thread really affects the smoothness of the animation.

So, how do I properly execute GL commands in the background without affecting the main thread and its rendering/animation?

Thank you!

PS: code on request

Insert a fence, call glFlush and then poll on the completion of the fence.
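
Something along these lines (just a sketch; the names are placeholders):

// producer thread, right after the upload commands
GLsync sync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush(); // make sure the fence actually reaches the GPU

// consumer thread, once per frame
GLenum r = glClientWaitSync(sync, 0, 0); // timeout of 0 == just poll
if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED)
{
    glDeleteSync(sync);
    // the uploaded data is now safe to use on this context
}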

In the worker thread I have this inside a loop:

glBindTexture(GL_TEXTURE_2D, textures[i]);
// source is the bound PIXEL_UNPACK_BUFFER; offset is a byte offset into it
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, blockWidth, blockHeight, format, type, reinterpret_cast<GLvoid*>(offset));
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

GLsync sync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush(); // submit the fence so it can be signaled
syncs[i] = sync;

Note that the textures are created before that loop, and all the syncs[i] start out as 0.
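
The pre-loop setup is roughly this (simplified; blockCount and internalFormat are placeholders):

glGenTextures(blockCount, textures);
for (int i = 0; i < blockCount; ++i)
{
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, blockWidth, blockHeight, 0, format, type, 0); // allocate storage only
    syncs[i] = 0; // no fence yet
}
glBindTexture(GL_TEXTURE_2D, 0);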

And in the render thread:

if (syncs[i] != 0)
{
    GLenum result = glClientWaitSync(syncs[i], 0, 0); // also tried GL_SYNC_FLUSH_COMMANDS_BIT and GL_TIMEOUT_IGNORED here
    if (result == GL_ALREADY_SIGNALED)
    {
        // …
        glDeleteSync(syncs[i]);
        syncs[i] = 0; // so the deleted sync isn't polled again next frame
    }
}

And as you can see I already tried combinations of those parameters.

In terms of behavior it's all OK, but in terms of smoothness it isn't.

You mean poll the fence on the worker thread like:

// for(;;)
// do work
// fence
// flush
// while not signaled wait
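
Concretely, something like this (just sketching the idea):

for (;;)
{
    // do work: upload the next block
    GLsync sync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();
    // block the worker until the GPU has consumed the upload (poll 1 ms at a time)
    while (glClientWaitSync(sync, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000) == GL_TIMEOUT_EXPIRED)
        ;
    glDeleteSync(sync);
}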

[QUOTE=promag;1240934]You mean poll the fence on the worker thread like:

// for(;;)
// do work
// fence
// flush
// while not signaled wait[/QUOTE]

No, I meant exactly what you described in your previous post. In my understanding this is how it's supposed to work; maybe somebody with more insight into the NVIDIA driver can comment.

Because it seems one of these happens:

  • the buffer swap on the rendering context is forcing something like a glFinish on the worker context
  • commands that usually stall the pipeline are stalling both contexts

However, I realize that I'm trying to do the same amount of work regardless of how fast the GPU is.
So, how can I query the hardware's capabilities so that I can put an upper limit on the worker's load?
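
For example, I could split each upload into smaller slices so that a single call never submits too much work at once (sliceRows and rowPitch below are placeholders I would have to tune and fill in, not values queried from GL):

const GLint sliceRows = 64;
for (GLint y = 0; y < blockHeight; y += sliceRows)
{
    GLint rows = (blockHeight - y < sliceRows) ? (blockHeight - y) : sliceRows;
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, y, blockWidth, rows, format, type, reinterpret_cast<GLvoid*>(offset + y * rowPitch));
    glFlush(); // give the driver a chance to interleave the main thread's work
}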

There was a recent post on the OpenGL.org main page advertising a presentation called “Optimizing Texture Transfers”. Maybe it could be helpful:
http://nvidia.fullviewmedia.com/gtc2012/0515-J2-S0356.html
http://developer.download.nvidia.com/GTC/PDF/GTC2012/PresentationPDF/S0356-GTC2012-Texture-Transfers.pdf