NVIDIA glActiveTexture bug?

Hey,

I’m wondering why the following code generates an OpenGL error (unsupported texture unit). GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS returns 192, yet the function errors at glActiveTexture(GL_TEXTURE0+32). The OpenGL man page for glActiveTexture states:
“Specifies which texture unit to make active. The number of texture units is implementation dependent, but must be at least 80. texture must be one of GL_TEXTUREi, where i ranges from 0 - (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1). The initial value is GL_TEXTURE0.”

void Simple3DEngine::ClearTextureState()
{
	// Query how many texture image units are available across all shader stages.
	int maxTextureUnits = 0;
	glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxTextureUnits);

	// Unbind the sampler and every texture target on each unit.
	for (int i = 0; i < maxTextureUnits; i++)
	{
		cout << i << endl;
		glActiveTexture(GL_TEXTURE0+i);
		glBindSampler(i, 0);
		glBindTexture(GL_TEXTURE_1D, 0);
		glBindTexture(GL_TEXTURE_2D, 0);
		glBindTexture(GL_TEXTURE_1D_ARRAY, 0);
		glBindTexture(GL_TEXTURE_2D_ARRAY, 0);
		glBindTexture(GL_TEXTURE_3D, 0);
		glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
		glBindTexture(GL_TEXTURE_RECTANGLE, 0);
	}
	// Reset the active texture unit and disable the texturing targets.
	glActiveTexture(GL_TEXTURE0);
	glDisable(GL_TEXTURE_2D);
	glDisable(GL_TEXTURE_1D);
	glDisable(GL_TEXTURE_3D);
	glDisable(GL_TEXTURE_2D_ARRAY);
	glDisable(GL_TEXTURE_1D_ARRAY);
	glDisable(GL_TEXTURE_CUBE_MAP);
	glDisable(GL_TEXTURE_RECTANGLE);
}

Jonathan

I forgot to mention that I’m using driver version 314.22 with a GTX 680M on Windows 8 Pro 64-bit.

#define GL_TEXTURE29 0x84DD
#define GL_TEXTURE30 0x84DE
#define GL_TEXTURE31 0x84DF
#define GL_ACTIVE_TEXTURE 0x84E0 // 0x84E0 is the same value as GL_TEXTURE0+32

I think that will explain why you’re getting errors: there are only enumerators for 32 texture units (0-31).
And there’s a topic about those limitations: http://www.opengl.org/discussion_boards/showthread.php/176351-How-to-set-up-more-than-32-Textures

I’m wondering why the following code generates an OpenGL error (unsupported texture unit)

What line exactly is it that generates that error? Also, where are you getting that error from? Is that a real glGetError error, or something else?
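If you can, it’s worth double-checking with a plain glGetError loop. Something like this rough sketch (just illustrative; function name is made up, assumes a current context) would show exactly which unit is the first to fail:

void CheckActiveTextureUnits()
{
	int maxTextureUnits = 0;
	glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxTextureUnits);

	for (int i = 0; i < maxTextureUnits; i++)
	{
		glActiveTexture(GL_TEXTURE0 + i);
		GLenum err = glGetError();
		if (err != GL_NO_ERROR)
		{
			// Report the first unit that raises an error and stop.
			cout << "glActiveTexture failed at unit " << i
				<< " with error 0x" << hex << err << dec << endl;
			break;
		}
	}
	glActiveTexture(GL_TEXTURE0);
}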

I think that will explain why you’re getting errors: there are only enumerators for 32 texture units (0-31).

The number of enumerators is not the same as the number of available texture units.
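If I read the spec right, anything past GL_TEXTURE31 is simply computed; a rough sketch (textureName is a placeholder):

	// GL_TEXTUREi values are contiguous, so units beyond 31 don't need a named enum.
	glActiveTexture(GL_TEXTURE0 + 40);          // legal per spec if 40 < MAX_COMBINED_TEXTURE_IMAGE_UNITS
	glBindTexture(GL_TEXTURE_2D, textureName);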

So is it legal to pass GL_TEXTURE0+32 (which has the same value as GL_ACTIVE_TEXTURE) or above as an argument to glActiveTexture? I didn’t try it myself.

Or do you think there’s maybe something in the surrounding code?

I’m using ARB_debug_output and gDEBugger; they both catch the error (invalid texture unit).
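For reference, my debug output hookup is roughly like this (a sketch; assumes the extension is present, and the callback signature/cast may need adjusting to match your GL headers):

static void APIENTRY DebugCallbackARB(GLenum source, GLenum type, GLuint id,
	GLenum severity, GLsizei length, const GLchar* message, const GLvoid* userParam)
{
	// Just print whatever the driver reports.
	cout << "GL debug: " << message << endl;
}

// ...at init time:
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
glDebugMessageCallbackARB((GLDEBUGPROCARB)DebugCallbackARB, nullptr);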

Never mind that I’m using glEnable/glDisable with GL_TEXTURE_*; I was just rereading that this is deprecated. Great reminder! Anyhow, glActiveTexture is indeed what causes the error.

From the 4.0 spec…

“ActiveTexture generates the error INVALID_ENUM if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0+i (i is in the range 0 to k - 1, where k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS).” pg. 173.

“All active shaders combined cannot use more than the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS texture image units. If more than one pipeline stage accesses the same texture image unit, each such access counts separately against the MAX_COMBINED_TEXTURE_IMAGE_UNITS limit.” pg. 89.

On page 371, MAX_COMBINED_TEXTURE_IMAGE_UNITS is guaranteed to be at least 80. I got 192 from my own query. Of course, it fails at 32, well below the spec minimum. I understand that we don’t need more enums (GL_TEXTURE1045, etc.), so glActiveTexture might be using an outdated check…

I guess I should clarify what I’m trying to do. It’s basically just cleaning up after arbitrarily executing a block of OpenGL calls. I’m not actually trying to use as many texture units as I can, but it seems that the following code should execute without error according to the spec:

int maxTextureUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxTextureUnits);
for (int i = 0; i < maxTextureUnits; i++)
{
	glActiveTexture(GL_TEXTURE0+i);
}

Now, I’m thinking the normal way of using the GL is to just call glActiveTexture(…) and glBindTexture(…), then set the sampler uniform in the shader program to the texture unit you used; there’s no cleanup afterwards. If that’s the case, though, it looks like I should be able to use texture unit 79 if I wanted to, set my sampler to 79, and it should just work. Unless there’s something here that I’m not considering…
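Something like this is what I have in mind (just a sketch; program, textureName, and the “uDiffuse” uniform name are placeholders):

	glActiveTexture(GL_TEXTURE0 + 79);                           // any unit below MAX_COMBINED_TEXTURE_IMAGE_UNITS
	glBindTexture(GL_TEXTURE_2D, textureName);
	glUseProgram(program);
	glUniform1i(glGetUniformLocation(program, "uDiffuse"), 79);  // sampler reads from unit 79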

But if you’re using a compatibility context, there might be merit to making sure that all the texture units are disabled and unbound, to ensure that the fixed-function code is not affected by whatever was done through the new pipeline.
