I am implementing a feature that loads a new scene on a background thread while the primary thread continues to draw. To do this I have created two OpenGL contexts, c-main and c-load. A class in my project encapsulates an entire scene (all VBOs, textures, shaders, and so on; everything currently loaded into the OpenGL context used to draw that scene). When I want to load a new scene, I kick off creation of the new scene on a separate thread and make c-load the current context on that thread. Once loading completes, I swap the contexts: c-load becomes c-main on the main render thread, and the previous c-main becomes c-load (empty and not current on any thread until the next scene is loaded).
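For reference, the handoff between the two threads looks roughly like the sketch below. This is a simplified stand-in, not my actual code: `Scene`, `loaderThread`, and `runHandoff` are made-up names, and the GL and context calls (which are platform-specific, WGL/GLX/EGL) are indicated only as comments.

```cpp
#include <atomic>
#include <thread>

// Simplified stand-in for the class that owns one scene's GL objects
// (VBOs, textures, shaders). The real class lives in a GL context.
struct Scene {
    bool ready = false;
};

// Slot the loader thread uses to publish the finished scene.
std::atomic<Scene*> g_pendingScene{nullptr};

// Loader thread body: would make c-load current, create all GL
// objects, then publish the finished scene for the render thread.
void loaderThread() {
    // wglMakeCurrent(dc, c_load);   // or glXMakeCurrent / eglMakeCurrent
    Scene* scene = new Scene;
    // glGenTextures / glTexImage2D / glGenerateMipmap ...
    // glFinish();                   // ensure uploads finish before publishing
    scene->ready = true;
    g_pendingScene.store(scene, std::memory_order_release);
}

// Render-thread side: keep drawing the old scene until the new one
// arrives, then swap it in (at which point c-load becomes c-main).
Scene* runHandoff() {
    std::thread loader(loaderThread);
    Scene* next = nullptr;
    while (!next) {
        // glDrawElements(...) with the previous scene here
        next = g_pendingScene.exchange(nullptr, std::memory_order_acquire);
    }
    loader.join();
    // Swap roles: c-load -> c-main on this thread; old c-main idles as c-load.
    return next;
}
```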
This works for the most part, but I have noticed that while the secondary thread is loading textures and generating mipmaps, my frame rate on the main thread drops substantially, producing visible jerkiness.
So my question, for anyone generous enough to be reading this, is: why do I see this degradation on the primary render thread when the textures are being loaded into a separate OpenGL context on a separate thread?
After some investigation, the theory I have come up with is this: since all OpenGL calls are added to the driver's command queue regardless of which CPU thread issued them, once the glTexImage2D calls are executed by the driver, all other commands, including the glDrawElements calls issued from the main thread in a separate context, are blocked until the driver finishes uploading the data and generating mipmaps. Is this correct, or have I wildly misunderstood something?