Why use or define texture units, and what is the mapping between uniforms and texture units?

Basically, from generating a texture to using it, we briefly do the following steps:

// creation and initialisation (texture unit 0 is active by default)
unsigned int texture1;
unsigned int texture2;
glGenTextures(1, &texture1);
glGenTextures(1, &texture2);
glBindTexture(GL_TEXTURE_2D, texture1);
unsigned char* data = load_image_data();
glTexImage2D( ... , data);
glUniform1i(glGetUniformLocation(ourShader.ID, "texture1"), 0);
glBindTexture(GL_TEXTURE_2D, texture2);
data = load_image_data();
glTexImage2D( ... , data);
glUniform1i(glGetUniformLocation(ourShader.ID, "texture2"), 1);
// before drawing, bind each texture to its texture unit
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
// on the shader side, we declare the samplers in the fragment shader:
#version 330 core
uniform sampler2D texture1;
uniform sampler2D texture2;
void main() { ... }

As we can see above, we generate the texture IDs with glGenTextures(1, &textureX) and then send the sampler uniforms their values with glUniform1i. So what is the role of GL_TEXTUREx at all, and of glActiveTexture? Couldn’t we do something like this instead:
glUniform1i(glGetUniformLocation(ourShader.ID, "textureX"), GL_TEXTURE0); ?

We’re passing the value 0 to the sampler2D uniform and then setting glActiveTexture to GL_TEXTUREx, so doesn’t that mean a uniform value of 0 means GL_TEXTURE0, 1 means GL_TEXTURE1, and so on? Which is exactly why I thought
glUniform1i(glGetUniformLocation(ourShader.ID, "textureX"), GL_TEXTURE0); should be done.

But if glActiveTexture(…) actually sends the value of the ID generated by glGenTextures(1, &textureX) to the GPU, then why would glUniform1i even be used?
Please explain this workflow/mechanism OpenGL uses for textures, involving the glActiveTexture, glGenTextures and glUniform1i functions.

Are you referring to GL_TEXTURE_2D or GL_TEXTURE0 here?

glActiveTexture and GL_TEXTURE0 etc exist because that was the simplest way to add multi-texturing (using multiple textures during rendering) to an existing API. The alternative would have been to change all of the texture-related functions to accept an additional parameter to identify the texture unit.

The enumerants for texture units are consecutive, so GL_TEXTUREn is equal to GL_TEXTURE0+n. If you want to use more than 32 texture units, you have to use the latter form as GL_TEXTURE32 doesn’t exist (and can’t be added, because it would have to have the value 0x84E0, but that’s already used for GL_ACTIVE_TEXTURE).

Essentially, there isn’t any point in including the offset in the uniform value. It would just complicate matters, particularly as glUniform1i isn’t limited to textures. In hindsight, it would have been better if glActiveTexture just used texture unit indices rather than enumerants, but what’s done is done.


The GPU is just going to want “raw” (zero-based) indices, so using enumerants would require the driver to subtract GL_TEXTURE0 from the value (but only when the uniform variable is a sampler type, not when it’s an int).

It doesn’t. glActiveTexture causes subsequent texture-related functions (e.g. glBindTexture, glTexImage*, glTexParameter, etc) to operate on a specific texture unit. glBindTexture associates a texture with the active texture unit. glUniform1i associates a texture unit with a uniform variable.

With the ARB_bindless_texture extension (not a core feature in any version), you can assign textures directly to uniforms (and also store texture handles in uniform blocks and SSBOs, which isn’t possible with the standard mechanism).

Also, with the direct state access (DSA) functions in 4.5 (the ones named glTexture* rather than glTex*), you don’t need to bind textures to texture units to manipulate them; you just use the texture name. But they still need to be bound to texture units to be accessed by the shader (unless you use the bindless texture extension).

How else are you going to control which texture corresponds to each variable?


So, could we say the texture unit is a placeholder: after glBindTexture, glTexImage actually uploads the data to the GPU, and glUniform1i(glGetUniformLocation(ourShader.ID, "texture2"), 1); only tells the GPU that the texture unit to which we uploaded is 1? Is my analogy correct?

If my above analogy is correct, what would happen if we executed the following:

glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(glGetUniformLocation(ourShader.ID, "texture1"), 5); // used 5 instead of 0 for GL_TEXTURE0

I’d assume we wouldn’t render the texture at all? Am I correct? Or could we remap the values so that GL_TEXTURE0 is referred to by consecutive numbers starting from, say, 100?

The glUniform1i call tells the GPU that the variable texture2 refers to texture unit 1. When you actually render something, any texture-related functions in the shader which use texture2 as the sampler parameter will read from whichever texture is bound to texture unit 1 at the time of the draw call.

Not unless you’d bound a valid texture to GL_TEXTURE5 elsewhere in the code.

Also, bear in mind that creation and initialisation of textures is quite distinct from access. E.g. in the above code, glActiveTexture(GL_TEXTURE0) makes texture unit 0 the active texture unit; glBindTexture(GL_TEXTURE_2D, texture1) binds the name stored in texture1 to the GL_TEXTURE_2D target of the active texture unit (and actually creates the texture object if the name doesn’t already refer to one); glTexImage2D(GL_TEXTURE_2D,...) uploads data to the texture bound to the GL_TEXTURE_2D target of the active texture unit. The glUniform1i call associates the variable texture1 in the current program object with texture unit 5 (this will generate an error if there is no current program object). This call isn’t affected by any existing texture state; provided that you bound a valid texture to texture unit 5 prior to executing any draw call using that program object, there wouldn’t be a problem.

It’s important to consider what state is stored where. E.g. the active texture is stored in the context (glActiveTexture), as is the set of textures bound to the various targets of the various texture units (glBindTexture). Texture data is stored in the texture object (glTexImage*), as are sampling parameters (glTexParameter*); the active texture unit and the bindings of textures to texture units (and targets) serve only to determine which texture is affected. Default-block uniform variables (glUniform*) are stored in the current program object (set by glUseProgram).

But doesn’t this mean that two consecutive glBindTexture calls would just overwrite the data bound to the active texture unit? By default the active texture unit is 0, so how is the data not overwritten, as shown in my code:

This doesn’t produce any error, and it doesn’t load just one texture; it loads both textures.

The only thing that’s overwritten is the binding. Textures continue to exist after they’re unbound. Each time that glTexImage* is called, it uploads data to whichever texture is bound to the active texture unit. In the above code, the first glTexImage* call occurs when texture1 is bound, the second when texture2 is bound. So they’re uploading data to different textures.


So glBindTexture almost emulates the behaviour of there being a buffer for binding, with all the glTexImage* calls being stored inside that buffer, and when glActiveTexture is called, that buffer is bound to that texture unit, and everything follows? Almost like a VAO?

All of the significant state is stored in a texture object. A texture unit is nothing more than a set of references to textures, one for each target.

The interface is somewhat complicated due to backward compatibility. OpenGL 1.0 didn’t have glBindTexture or texture objects. There was only one texture of each type (1D or 2D), and if you wanted to change the texture you had to upload new data with glTex[Sub]Image*. OpenGL 1.1 added texture objects; rather than changing all of the texture-related functions (or adding many new functions) to accept a texture name as a parameter, it added glBindTexture. So existing functions continued to operate on “the 1D texture” or “the 2D texture” but you could change which texture was “the” texture. But you could still only use one texture at a time.

OpenGL 1.3 added support for multi-texturing (combining multiple textures to generate the final fragment colour). Rather than changing existing functions to accept a texture index (or adding many new ones), it added glActiveTexture.

So whereas in 1.0 GL_TEXTURE_2D meant the (one and only) 2D texture, it now means the 2D texture which is currently bound to the texture unit which is currently active, and you have glActiveTexture to control which unit is active and glBindTexture to control which texture is bound to (a specific target of) the active texture unit. Note that the concept of “active texture unit” is only relevant to client-side texture-management functions; it doesn’t affect rendering (every texture unit referenced by a sampler-type uniform variable is active for rendering).

OpenGL 4.5 provides a somewhat simpler interface in the glTexture* functions, which reference textures by name and don’t care about the active texture unit or which textures are bound to which texture units, and also glBindTextures, which binds textures to texture units without caring which texture unit is active.