I have two threads: the first (off-screen rendering thread, OSRT) renders some data to a texture, and the other (main rendering thread, MRT) renders something using this texture to the default framebuffer. The contexts are shared using wglShareLists (OSRT's RC is created with the same DC as MRT's RC; I hope that's not a problem). OSRT never touches the default framebuffer; it renders to a texture through an FBO.
I want MRT to run at realtime framerates. It's not a problem if OSRT sometimes updates the texture much more slowly. To prevent rendering to and sampling from the same texture at the same time, and to avoid any stalling of MRT, I use three textures, similar to triple buffering. OSRT renders to t3 and, when it finishes, swaps t2 and t3. MRT uses t1 for its rendering and, when it finishes, swaps t1 and t2. In my case swapping t1 and t2 simply means that the t1 and t2 GL names are exchanged. The swaps are protected with a critical section.
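To make the scheme concrete, here is a minimal sketch of the name-swap logic described above. It uses std::mutex in place of the Win32 critical section, and TexName stands in for GLuint so the snippet compiles without GL headers; the member names t1/t2/t3 match the description:

```cpp
#include <mutex>
#include <utility>

// Stand-in for GLuint so the sketch compiles without GL headers.
using TexName = unsigned int;

struct TripleTexture {
    TexName t1 = 1, t2 = 2, t3 = 3;  // GL texture names
    std::mutex swapLock;             // plays the role of the critical section

    // OSRT: called after it has finished rendering into t3.
    void producerSwap() {
        std::lock_guard<std::mutex> lock(swapLock);
        std::swap(t2, t3);           // publish the freshly rendered texture
    }

    // MRT: called after it has finished drawing with t1.
    void consumerSwap() {
        std::lock_guard<std::mutex> lock(swapLock);
        std::swap(t1, t2);           // pick up the latest published texture
    }
};
```

Only the integer names are exchanged under the lock, so neither thread ever blocks on the other for more than the duration of a pointer-sized swap.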
I just can't make the whole thing work (fast MRT with possibly long calculations in OSRT). Right now MRT is only fast when the calculation in OSRT is relatively simple.
I have some questions:
Does glFinish apply only to the commands issued on the calling thread's context?
Does SwapBuffers do an implicit finish or something like that? It seems that it does; however, I found that on my machine glFinish busy-waits (burning 100% of a single core), while SwapBuffers uses only 1-2% of the processor time.
How can I reliably determine when rendering to / rendering from a texture has finished? Currently, for OSRT I call glFinish and swap the textures after that. For MRT I swap the textures after SwapBuffers. I want to support both NV and ATI cards, so I don't want to use vendor-specific fences.
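Here is roughly where the sync points sit in each thread's frame loop; the two stub functions are placeholders for the real glFinish() and SwapBuffers(hdc) calls so the sketch is self-contained:

```cpp
// Counters stand in for the real GL/WGL calls so the sketch is self-contained
// and the call ordering can be observed.
static int finishCalls  = 0;
static int presentCalls = 0;
void glFinishStub()    { ++finishCalls; }   // real code: glFinish()
void swapBuffersStub() { ++presentCalls; }  // real code: SwapBuffers(hdc)

// OSRT frame: render into t3 through the FBO, wait for the GPU to finish,
// then publish the texture so MRT never samples a half-written one.
void osrtFrame() {
    // ... FBO binds and draw calls into t3 would go here ...
    glFinishStub();       // block until rendering into t3 is complete
    // swap t2 <-> t3 inside the critical section
}

// MRT frame: draw with t1, present, then fetch the newest published texture.
void mrtFrame() {
    // ... draw calls sampling t1 would go here ...
    swapBuffersStub();    // present; treated as this context's sync point
    // swap t1 <-> t2 inside the critical section
}
```

So each thread synchronizes only on its own context before touching the shared names; the question is whether these two sync points are actually enough.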
In my case, it seems that SwapBuffers waits for OSRT's pending GL calls too, because it hangs for a long time. I experimented with setting OSRT's priority to lowest and MRT's priority to time-critical (vsync is on, so it's not dangerous), but it didn't help at all.
Thanks for any help