glGenTextures/DeleteTextures performance

Hi,

From a performance point of view, when dynamically managing a very large set of texture objects, is a succession of glGenTextures(1,…) and glDeleteTextures(1,…) calls efficient enough to avoid using a large texture name array (generated once) together with my own name reallocation manager?

Thanks

Could you explain more of what you are trying to accomplish?

Brian

Whether they are efficient? You do realize that it depends on the GL drivers?

The first one just generates names, something like this:

for(i = 0; i < count; i++)
    yourarray[i] = texname++; /* texname: a running counter kept by the driver */

The second one does memory-allocation-related work, which depends on the hardware/drivers.

I’ll try to be a bit more precise (English is not my first language).

graphicsMan: I have developed a texture manager on top of OpenGL’s texture object manager. It hands out texture object names either from a single array allocated via glGenTextures at startup, or from a stack containing previously released names. Thus I call glDeleteTextures only once, at the end of the application. But when a name is reassigned to a new texture (taken from the stack of freed names), I suppose OpenGL internally executes something equivalent to glDeleteTextures.
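
In C, the idea looks roughly like this (the identifiers and the fixed pool size are mine, just for illustration):

#include <GL/gl.h>

#define POOL_SIZE 1024

static GLuint pool[POOL_SIZE];  /* names obtained once via glGenTextures */
static GLuint freed[POOL_SIZE]; /* stack of previously released names */
static int next_unused = 0;
static int freed_top = 0;

void pool_init(void)
{
    glGenTextures(POOL_SIZE, pool); /* one allocation at startup */
}

GLuint pool_acquire(void)
{
    if (freed_top > 0)
        return freed[--freed_top]; /* reuse a released name */
    return pool[next_unused++];    /* otherwise hand out a fresh one */
}

void pool_release(GLuint name)
{
    freed[freed_top++] = name;
}

void pool_shutdown(void)
{
    glDeleteTextures(POOL_SIZE, pool); /* one delete at exit */
}

(No bounds checking, to keep the sketch short.)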

So, after a code review, I am asking myself whether such a mechanism isn’t ultimately redundant with OpenGL’s own. I mean, should I let OpenGL manage each texture entirely (a call to glGenTextures(1,…) for each created texture, then a call to glDeleteTextures(1,…) when releasing it)? I suspect that repeating glGen/DeleteTextures like that could be inefficient…

V-man:

for(i = 0; i < count; i++)
    yourarray[i] = texname++; /* texname: a running counter kept by the driver */

Is texname always incremented? Are the released names not redistributed by the subsequent calls to glGenTextures?

THANKS !

Hmmmm, I’m not sure. I know that in DirectX you can allocate MEMORY for a texture and then play with the texture at will. That way, if you can reuse the same memory (say the texture has the same format and size), it is much more efficient not to deallocate the texture.

In OpenGL, I don’t know if there is a mechanism for doing this. I believe that what you describe doing is redundant.

Brian

OK…

In OpenGL, I don’t know if there is a mechanism for doing this. I believe that what you describe doing is redundant.

I guess this is defined at the driver level, as the memory allocation mechanisms can differ between implementations…

As I do not consistently check that the replacement texture shares the same size as its predecessor, there is indeed a good chance that my code was redundant.

thank you Brian

Cyril.

>>>Are the released names not redistributed by the subsequent calls to glGenTextures?<<<

From what I understand, no implementation redistributes names.
And there is no need to, since we are talking about a 32-bit integer here, so it is unlikely that simple incrementing will cause trouble.
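
For scale: 2^32 is about 4.3 billion names, so even at 1,000 glGenTextures calls per second, a simply incrementing counter would need roughly 50 days of continuous running to wrap around.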

As for the texture memory reusing business, if you use glTexImageXD, where X is 1, 2, or 3, then most drivers will reallocate memory.

Use glTexSubImageXD instead.
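
For example, with a 2D texture (the 256x256 RGBA size and format here are just an assumption for illustration):

#include <GL/gl.h>

void upload(GLuint tex, const GLubyte *pixels, int first_time)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    if (first_time) {
        /* Allocates the storage; calling this again usually means
           a fresh allocation on most drivers. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    } else {
        /* Same size and format: updates the existing storage in place. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
}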

From what I understand, no implementation redistributes names.
And there is no need to, since we are talking about a 32-bit integer here, so it is unlikely that simple incrementing will cause trouble.

You’re absolutely right… A 32-bit counter should be sufficient for quite a while

Thanks !

Cyril.