Okay, I’m revisiting this subject as I had abandoned it only to need it desperately again.
I’m trying to run animation in a main thread while loading textures in the background in a separate thread. The only GL calls I’m making in the second thread pertain to generating GL textures etc… But the texture IDs that get assigned (even for the very first texture generated) come out as garbage values like 95454216. And obviously these don’t work - I just get white.
So, I’ve heard some people say that you can’t do GL stuff in separate threads and use it together, is that true? Can I not generate texture IDs and call funcs like gluBuild2DMipmaps() or glTexImage2D() and then use the products of these in my main thread? If it is possible, how?
wglShareLists will allow you to share display lists and texture names between threads.
Each thread must have its own rendering context to do even simple things like load textures. This is the root of your texture ID problem. However, there is then a further issue – separate rendering contexts don’t share texture IDs, so the textures still won’t show up. What you can do is load the files in the background thread and create the actual textures in the “OpenGL” thread. The texture creation itself doesn’t take very long at all, so this should still work.
Multithreading OpenGL usually isn’t worth it, and unless you are running it on a multi-processor system it will often decrease performance. Having an “OpenGL” thread and a “computations” thread or something like that can have benefits, but in my (short) experience getting two different OpenGL threads to behave is a nightmare.
Okay, I’ll try passing the pixel data back to the main thread and creating the texture names there… we’ll see how it impacts performance. If it impacts it too much, I’ll have to go with the wglShareLists idea.
Okay, the idea of just letting the “main” thread create the textures didn’t work well… there’s a significant pause at texture load because the textures are quite large. When it’s done in a separate thread, it goes smoothly. So I want to use the separate-thread idea if at all possible.
That having been said, I tried the wglShareLists thing and it didn’t work… but that could be because I’m doing something wrong. Here’s my question:
In Windows, to do basic texture creation, do you have to go through the whole setup process of a rendering context just to generate textures? Also, if so, can the HDC you use as the base for your rendering context be a memory DC? That is, instead of actually having to do something like GetDC(hWnd) can you do:
hDC = CreateCompatibleDC(hSomeDCThatHasAlreadyBeenSetupForOpenGL);
? Will that work?
To use wglShareLists and share between 2 threads, you do need 2 separate rendering contexts. You also need to call wglMakeCurrent whenever each thread is doing something GL-related, or it will fail (essentially skipping whatever GL calls you make while that context is not current).
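For what it’s worth, the call sequence looks roughly like this. This is a Windows-only sketch, untested here; note that wglShareLists should be called before the second context has created any display lists or textures of its own, and a context can be current in only one thread at a time:

```cpp
#include <windows.h>
#include <GL/gl.h>

HGLRC g_renderRC = NULL;  // made current in the rendering thread
HGLRC g_loaderRC = NULL;  // made current in the texture-loading thread

// hdc: a DC that already has a GL pixel format set on it
BOOL createSharedContexts(HDC hdc)
{
    g_renderRC = wglCreateContext(hdc);
    g_loaderRC = wglCreateContext(hdc);
    if (!g_renderRC || !g_loaderRC)
        return FALSE;
    // Share texture names and display lists between the two contexts.
    return wglShareLists(g_renderRC, g_loaderRC);
}

// In the loader thread:
//   wglMakeCurrent(hdc, g_loaderRC);
//   glGenTextures(...); glTexImage2D(...);  // names visible to g_renderRC
//   wglMakeCurrent(NULL, NULL);             // release when the thread is done
```

Each thread keeps its own context current for its lifetime; you shouldn’t need to bounce a single context back and forth between threads.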
There is an example on msdn or in the documentation that comes with Visual Studio and other Visual MS products. The name of the article is “OpenGL VI: Rendering on DIBs with PFD_DRAW_TO_BITMAP”. If you look at the section called “CGL::Create” it gives you a function that will give what you are probably looking for. And you should have no difficulty using that with your thread rather than a new CWnd or other window.
Hope that helps.
Well, I for one have had enough… After thinking about it, even if I do successfully share texture names between my threads, the fact is, my texture loading routine takes a fair amount of time. In that time, my other thread will need to draw (requiring a wglMakeCurrent() call). After which, control will pass back to my texture generating thread. And there’s no way to predict at which point it will re-enter the other thread. The only way to control this that I can think of is to use a mutex to cause the main thread to wait until the texture generation is done - thereby defeating the ENTIRE PURPOSE!!!
I’m fed up! Screw OpenGL! I’ve been strictly loyal for a couple of years now, but I’ve had enough. I’m tired of this byzantine API bullcrap. I’m going with Direct3D.
Shouldn’t be that difficult. You load texture files - that’s not GL. You parse them - not OpenGL. The only GL calls you’ll have are glTexImage or glGenLists and a few others… just put a wglMakeCurrent before each. What’s the big deal?
That’s exactly what I’ve done. It’s just glGenTextures and a few others like you said. But because of the nature of my app, I have to load large textures. And that’s really not the problem either. The real problem is that I have to load large textures broken into small chunks, which means lots of calls to glTexImage etc… This causes a noticeable “hiccup” when the rendering loop has to wait for this to complete. To do the multithreading, I’d have to put wglMakeCurrent in front of every one of those function calls, and even then it wouldn’t be 100% safe, because the thread could be preempted right after the wglMakeCurrent call and resume only after wglMakeCurrent had been called elsewhere.
But even if I could ensure that thread execution would continue uninterrupted until all the needed GL calls were finished, that would defeat the purposes of multithreading. If my main rendering loop has to wait for a mutex or something to free up, then I’ve just lost the entire benefit. I might as well be making the calls from the main loop.
Then you could suspend the thread until you need it, or allow the animation thread to display only when you post a thread message from the texture loading thread. It seems as though there are many ways you could do it.
It seems you should take a specific, limited number of textures or sub-textures and load them in the time between frames, rather than allowing as many as possible to go through before the other thread takes control. You need to control the threads to do what you want - don’t let them control you!