Texture loading in separate thread

I have a large texture (~6mb) that takes 4-5 seconds to load. I would like to load this in a separate thread, but I don’t seem to be able to load it. When I call glGenTextures() I do not get a valid ID back. I am sure someone knows how to do a parallel texture load.

TIA

I have been multithreading OpenGL stuff for quite a while, and unless there is some trick that I have absolutely no clue about, I am afraid that texture is going to have to be loaded in the same thread as the rest of your OpenGL stuff. You can, however, use another thread to load the texture from the BMP and all that, but the actual glTexImage2D() or gluBuild2DMipmaps() calls will have to be made by your “OpenGL thread”…

Originally posted by beaver:
[b]I have a large texture (~6mb) that takes 4-5 seconds to load. I would like to load this in a separate thread, but I don’t seem to be able to load it. When I call glGenTextures() I do not get a valid ID back. I am sure someone knows how to do a parallel texture load.

TIA[/b]

You can’t upload a texture to the graphics card while you’re also doing other things with it at the same time.
That’s why you gain nothing by trying to render from more than one thread.

There’s more to it than that. I have (accidentally) tried loading textures from the “other thread”, and it just doesn’t work, even though my “OpenGL thread” was blocked (not doing anything). I think OpenGL is picky about which thread its functions are called from.

I have the same problem… Did you solve it?

Byez, Emanem! :smiley:

Every good book will tell you that you cannot simply set up OpenGL in one thread and then use it in another thread.

Before using OpenGL in a thread, you have to make the OpenGL context current in that thread. If you don’t do that, you will get lots of errors.

And that context can only be current in one thread at a time, so you cannot render something and upload a texture simultaneously.

However, the slowest part of loading a texture is usually the disk access. And THAT you can do in parallel. Simply load your data in another thread, process it (decoding, etc.) and keep the data in RAM. Then set a flag, and let the OpenGL thread upload it the next frame.
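Something like this rough, untested sketch of what I mean (loadPixelsFromDisk() is just a placeholder for your own BMP reader, not a real function):

[code]
// Loader thread does only disk I/O and decoding -- no GL calls at all.
// The GL thread polls the flag once per frame and does the actual upload.
#include <GL/gl.h>
#include <atomic>
#include <vector>

struct PendingTexture {
    std::vector<unsigned char> pixels;   // decoded RGBA data, filled by the loader thread
    int width  = 0;
    int height = 0;
    std::atomic<bool> ready{false};      // set by the loader, checked by the GL thread
};

PendingTexture g_pending;

// Placeholder: read and decode the image file into 'pixels'.
void loadPixelsFromDisk(const char* filename, std::vector<unsigned char>& pixels,
                        int& width, int& height);

// Entry point of the loader thread.
void loaderThread(const char* filename)
{
    loadPixelsFromDisk(filename, g_pending.pixels, g_pending.width, g_pending.height);
    g_pending.ready.store(true, std::memory_order_release);
}

// Called once per frame from the thread that owns the GL context.
void uploadIfReady(GLuint texId)
{
    if (!g_pending.ready.load(std::memory_order_acquire))
        return;

    glBindTexture(GL_TEXTURE_2D, texId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, g_pending.width, g_pending.height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, g_pending.pixels.data());
    g_pending.ready.store(false);
}
[/code]

Spawn loaderThread() with whatever threading API you like and call uploadIfReady() once per frame in your render loop.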

You can even speed this up by storing the data on disk in exactly the format you want OpenGL to store it in, so the driver does not need to convert it anymore.
For example, DXT compression can be used here (the DDS file format).

To spread the cost out even more, you can also precompute the mipmaps yourself (not gluBuild2DMipmaps or automatic generation) and then upload one level of the texture each frame, until all mipmap levels are uploaded. This way you don’t upload the whole chunk at once.
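To illustrate the idea, something like this sketch (assuming the downscaled levels are already sitting in system memory):

[code]
// Sketch: upload one precomputed mipmap level per frame instead of all at once.
// Note that with a mipmapping minification filter the texture is not complete
// (and not usable) until every level has been uploaded.
#include <GL/gl.h>
#include <vector>

struct MipLevel {
    std::vector<unsigned char> pixels;   // RGBA data for this level
    int width  = 0;
    int height = 0;
};

// Call once per frame until it returns true; start with nextLevel = 0.
bool uploadNextLevel(GLuint texId, const std::vector<MipLevel>& levels, int& nextLevel)
{
    if (nextLevel >= static_cast<int>(levels.size()))
        return true;                     // all levels uploaded already

    const MipLevel& lvl = levels[nextLevel];
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexImage2D(GL_TEXTURE_2D, nextLevel, GL_RGBA, lvl.width, lvl.height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, lvl.pixels.data());
    ++nextLevel;
    return nextLevel >= static_cast<int>(levels.size());
}
[/code]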

That’s all the advice I can give you.

Bye,
Jan.

It is a pretty eerie thought, but I was thinking about this exact issue in the car on my way back home 15 minutes ago :confused: :eek:
I got online, saw someone posing that question in the forums, and freaked out :stuck_out_tongue:

Can you read my mind beaver? :smiley:

Why can’t you have 2 wglShareLists’d contexts, one context for your main thread and one for your background thread? Upload textures in GL on one context and then have a synchronization object to tell you when they’re done, so you can use them on your main thread… I’d expect this to work, yes?
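I haven’t tried this on many drivers, so take it as a sketch of the idea rather than something guaranteed to work everywhere (Win32/wgl, error checking omitted):

[code]
// Two contexts created from the same window: the main thread keeps one, the
// loader thread makes the other current. wglShareLists() makes texture names
// visible to both. Some drivers prefer each thread to use its own HDC
// (GetDC() per thread), so treat that as an assumption to verify.
#include <windows.h>
#include <GL/gl.h>

HGLRC g_mainRC   = nullptr;
HGLRC g_loaderRC = nullptr;

void createContexts(HDC hdc)
{
    g_mainRC   = wglCreateContext(hdc);
    g_loaderRC = wglCreateContext(hdc);
    wglShareLists(g_mainRC, g_loaderRC);   // share texture objects between the two
    wglMakeCurrent(hdc, g_mainRC);         // main thread keeps rendering with this one
}

// Background thread: gets its own current context, uploads, then signals.
DWORD WINAPI loaderThreadProc(LPVOID param)
{
    HDC hdc = static_cast<HDC>(param);
    wglMakeCurrent(hdc, g_loaderRC);

    GLuint tex = 0;
    glGenTextures(1, &tex);                // valid now, because a context is current here
    glBindTexture(GL_TEXTURE_2D, tex);
    // ... glTexImage2D() with the data read from disk ...
    glFinish();                            // make sure the upload has finished before signalling

    wglMakeCurrent(nullptr, nullptr);
    // SetEvent(...) / set a flag here so the main thread knows 'tex' is ready.
    return 0;
}
[/code]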

Of course you can have a context for each thread, or at least you can call OpenGL functions from several threads. However, the driver will almost certainly block all other OpenGL commands while one command is being executed.
So, even if you had 2 or more processors, meaning you could do true multithreading, you wouldn’t be able to upload AND render at the same time. That would make drivers too complex, and it wouldn’t gain you anything.

So, what is the problem with uploading the texture at the beginning of a frame? As I said, you can upload it piecewise, one mipmap level at a time. I am pretty sure this should keep lags to a minimum.

Just try it out.

Jan.

Do you mean that if I have 2 threads, each with its own context, they can’t issue 2 OpenGL commands at the same time even if they operate on 2 separate contexts?

Byez, Emanem! :wink:

Originally posted by Emanem:
[b]Do you mean that if I have 2 threads, each with its own context, they can’t issue 2 OpenGL commands at the same time even if they operate on 2 separate contexts?

Byez, Emanem! :wink: [/b]
I think a lot of people have trouble understanding what OpenGL is.

OpenGL is a specification. It’s a document that describes behavior, which other companies study to create their own implementations: drivers, software renderers, and so on.

NOWHERE in the GL spec does it say anything about not being able to do what you are discussing here.

To correct Jan’s post, it all depends on your implementation of OpenGL.

Some companies may have very smart drivers that wouldn’t block other threads running GL, and some companies simply don’t care to offer this feature.

Most cheap consumer cards don’t offer this because games don’t need it. Some high-end professional cards do support this kind of thing.

There’s more to it than that. I have (accidentally) tried loading textures from the “other thread”, and it just doesn’t work, even though my “OpenGL thread” was blocked (not doing anything). I think OpenGL is picky about which thread its functions are called from.
That type of behavior can depend on the OS as well. Like on Windows, you have to make the GL context current on the thread before you make GL calls, else the GL calls get ignored.
In this case, this is explained in Microsoft’s own documents - MSDN

Hope this clears up things for everyone learning GL.

beaver, you might consider the task of loading big textures as a generic disk I/O operation, and develop a part of your engine to handle this task specifically. But it could be used to load anything, a chunk of a terrain, for example. It could be set up to feed various subsystems, such as graphics. Without a tight coupling, you’ll have a generic loader that could be used for any load-on-demand kind of data. And, not being dependent on OpenGL, it will be thread-safe.
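A sketch of the kind of decoupling I mean; the class and member names are just made up for illustration:

[code]
// A generic background loader: it only moves bytes from disk into memory, so it
// never touches OpenGL and is safe to run in any thread. Whatever consumes the
// data (renderer, terrain system, ...) decides what to do with it on its own thread.
#include <condition_variable>
#include <fstream>
#include <functional>
#include <iterator>
#include <mutex>
#include <queue>
#include <string>
#include <utility>
#include <vector>

struct LoadJob {
    std::string path;                                   // file to read
    std::function<void(std::vector<char>)> onLoaded;    // run later on the consumer's thread
};

class AsyncLoader {
    using Finished = std::pair<std::vector<char>, std::function<void(std::vector<char>)>>;

public:
    void request(LoadJob job)            // may be called from any thread
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_pending.push(std::move(job));
        }
        m_cv.notify_one();
    }

    // The consumer (e.g. the OpenGL thread) calls this once per frame to pick up
    // finished jobs and run their callbacks on its own thread.
    void dispatchFinished()
    {
        std::queue<Finished> done;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            std::swap(done, m_finished);
        }
        while (!done.empty()) {
            done.front().second(std::move(done.front().first));
            done.pop();
        }
    }

    void run()                           // body of the worker thread, loops forever
    {
        for (;;) {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [this] { return !m_pending.empty(); });
            LoadJob job = std::move(m_pending.front());
            m_pending.pop();
            lock.unlock();

            std::ifstream file(job.path, std::ios::binary);
            std::vector<char> data((std::istreambuf_iterator<char>(file)),
                                   std::istreambuf_iterator<char>());

            lock.lock();
            m_finished.emplace(std::move(data), std::move(job.onLoaded));
        }
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<LoadJob> m_pending;
    std::queue<Finished> m_finished;
};
[/code]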

V-Man: Of course you are right. OpenGL is only a specification, and as such it can be implemented in a way that allows this kind of task.

HOWEVER, I was assuming that beaver is working on a typical consumer card. And such cards are, of course, optimized for gaming, not for CAD, simulators, or whatever else. And for typical games it is NOT necessary to be able to load textures (really) simultaneously.
Therefore I am assuming (I admit, I don’t know it) that current consumer cards are simply not able to do this, because it would be quite complex to implement, and as I already said, it isn’t really necessary for games.

Therefore I simply tried to give advice on how to handle this situation. It is nice if OpenGL’s spec allows implementations to work in parallel, but that’s simply useless as long as it is not implemented in the hardware/drivers that you are using!

If you want to be really sure whether gfx cards support it (or WHICH cards support it), then you would need to ask a driver developer. However, I am pretty sure there is no such consumer hardware.

Jan.

FWIW, this exact scenario of loading textures into context A on one thread while rendering with shared context B on another thread was discussed at Apple’s WWDC 2003 GL session. So it is at least supported in Apple’s GL implementation.

I’ve solved this problem with multithreading in this way:

One thread (not the main one with the OpenGL context) loads textures from disk, even creates the mipmaps in memory, and then posts a custom message to the other thread’s window whenever it has finished loading a texture.
The second (and main OpenGL) thread is the one that handles the window. When it receives the custom “a texture has been loaded” message, it picks up the loaded texture data and actually calls the OpenGL functions to store it in the OpenGL context.

All this stuff works perfectly… no need for 2 contexts, etc.
But I must admit that the fact that OpenGL doesn’t operate without a current context is quite a pain in the ass for tricky things…
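For anyone curious, the skeleton of it looks roughly like this (Win32; the message value and struct name are just what I happened to pick):

[code]
// The loader thread never touches OpenGL; it posts a message to the window owned
// by the main (OpenGL) thread, which then does the actual glTexImage2D() call.
#include <windows.h>
#include <GL/gl.h>
#include <vector>

const UINT WM_TEXTURE_LOADED = WM_APP + 1;   // custom message, exact value is arbitrary

struct LoadedTexture {                       // allocated by the loader, freed by the GL thread
    std::vector<unsigned char> pixels;       // decoded image data (mipmap levels could go here too)
    int width  = 0;
    int height = 0;
};

// Loader thread: read + decode the file, then hand the result to the window thread.
DWORD WINAPI loaderThreadProc(LPVOID param)
{
    HWND hwnd = static_cast<HWND>(param);
    auto* tex = new LoadedTexture;
    // ... fill tex->pixels / tex->width / tex->height from disk ...
    PostMessage(hwnd, WM_TEXTURE_LOADED, 0, reinterpret_cast<LPARAM>(tex));
    return 0;
}

// Window procedure running on the main thread, where the GL context is current.
LRESULT CALLBACK wndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_TEXTURE_LOADED) {
        auto* tex = reinterpret_cast<LoadedTexture*>(lParam);
        GLuint id = 0;
        glGenTextures(1, &id);               // works here, because the context is current
        glBindTexture(GL_TEXTURE_2D, id);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex->width, tex->height,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, tex->pixels.data());
        delete tex;
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
[/code]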

Really, thnx for help, Emanem! :smiley: