I’m using OpenTK in Windows.
I have an app with a user interface, and I want to run a background thread that continuously renders to a framebuffer object (FBO).
After each rendered frame, I want to display (different parts of) the rendered texture in one or more controls in my UI.
Creating a render-thread is not a problem (making sure the context is only current in one thread at a time).
But how do I synchronize so that, once the background thread has rendered a frame, the OpenGL controls in my UI thread display (different parts of) the rendered texture?
I assume I first need to wait for the GPU to finish rendering, and then somehow have the OpenGL controls in the UI thread draw a control-filling quad with the wanted coordinates into the rendered texture.
So does anybody know how I should actually approach and synchronize all this?
I haven't done anything like this, but an event-based system seems like one approach you could take.
Use a listener that both the render and UI classes register with: the renderer fires an event when it has rendered a frame, and the listener informs the UI.
I believe OGRE uses an event system for rendering frames; that might be something to look at if you go with this type of approach.
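To make the event idea concrete, here's a minimal sketch in C#. All the names (`FrameRenderer`, `FrameRendered`, `RenderSceneToFbo`) are hypothetical; the point is just that the render thread raises an event per frame and the UI marshals the redraw onto its own thread:

```csharp
using System;
using System.Threading;

// Hypothetical render class: fires FrameRendered after each frame.
class FrameRenderer
{
    public event EventHandler FrameRendered;

    public void RenderLoop(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            RenderSceneToFbo(); // hypothetical: draw the frame into the FBO
            FrameRendered?.Invoke(this, EventArgs.Empty);
        }
    }

    void RenderSceneToFbo() { /* ... GL calls on the render context ... */ }
}

// UI side (e.g. a WinForms GLControl): subscribe and marshal to the UI thread.
// renderer.FrameRendered += (s, e) =>
//     glControl.BeginInvoke((Action)glControl.Invalidate);
```

The key detail is that the event handler must not touch GL state directly — it only schedules a repaint, and the control's own paint handler does the GL work with the UI context current.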
Okay, so say my render thread has its own window and context.
I render, then I have to wait until it's done using an awkward glFinish (the only thing happening on the main thread is UI work), and then I fire an event notifying the UI thread that a frame has finished. The UI thread binds the texture (in its own context) and draws a textured quad with the desired coordinates to the output control.
Something like that ?
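Roughly, yes. A sketch of that flow in OpenTK, assuming the two contexts are created shared (so the FBO's color texture is visible to both) and leaving texture/FBO setup out; `frameReady`, `fboColorTexture`, and `DrawTexturedQuad` are hypothetical names:

```csharp
using System.Threading;
using OpenTK.Graphics.OpenGL4;

// --- render thread (its context is current here) ---
// ... draw the frame into the FBO ...
GL.Finish();                       // block until the GPU has finished the frame
frameReady.Set();                  // e.g. an AutoResetEvent, or raise an event
                                   // that invalidates the UI controls

// --- UI thread, in the control's Paint handler (UI context current) ---
GL.BindTexture(TextureTarget.Texture2D, fboColorTexture);
DrawTexturedQuad(u0, v0, u1, v1);  // hypothetical: quad showing the wanted
                                   // sub-rectangle of the rendered texture
// glControl.SwapBuffers();
```

Note the glFinish stalls the render thread until the GPU is completely idle, which is the awkward part — hence the fence suggestion below in this thread.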
Take a look at glFenceSync and the related functions (the ARB_sync extension, core since OpenGL 3.2) for a way to avoid glFinish.
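For reference, a minimal sketch of the fence-based replacement using the OpenTK GL4 bindings — insert the fence right after the frame's draw calls, then wait on it only when the texture is actually needed:

```csharp
using OpenTK.Graphics.OpenGL4;

// Render thread, right after the frame's draw calls:
IntPtr fence = GL.FenceSync(SyncCondition.SyncGpuCommandsComplete,
                            WaitSyncFlags.None);

// Later (can be on the UI thread, with a shared context current),
// wait until the GPU has finished that specific frame:
WaitSyncStatus status = GL.ClientWaitSync(
    fence,
    ClientWaitSyncFlags.SyncFlushCommandsBit, // flush so the fence can signal
    1_000_000_000);                           // timeout in nanoseconds (1 s)

if (status == WaitSyncStatus.AlreadySignaled ||
    status == WaitSyncStatus.ConditionSatisfied)
{
    // Safe to sample the FBO's color texture now.
}
GL.DeleteSync(fence);
```

Unlike glFinish, this only waits for commands up to the fence, so the render thread can already be working on the next frame while the UI consumes the previous one.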