More than 8 TMUs on NVIDIA GL on XP

We have some code that is getting an error when we try to use more than 8 TMUs on a GF8800. The docs say this card has 32 TMUs; how do we get to them from GL to be able to stack more than 8 layers in one pass?

(Specifically, we get the error in our call to glClientActiveTextureARB when we pass GL_TEXTURE0_ARB + tmu with tmu > 7.)

You can bind 32 texture images, but there are only 8 sets of texture coordinates in the legacy named-attribute model. With generic attributes you have up to 16, which may be used for any purpose you wish, including texture coordinates. The assumption is that you can share some attributes across multiple images and/or generate some of the coordinates in your shaders.
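To illustrate, here is a sketch of the binding side of that idea (not code from the poster; the `tex` array, the `layer%d` uniform names, and the program handle `prog` are hypothetical, and it assumes a current GL context and a linked shader program). Note that no glClientActiveTextureARB appears anywhere — image units beyond 7 are reached purely through glActiveTextureARB plus sampler uniforms:

```c
/* Sketch: attach num_layers (> 8) textures to texture image units and
 * tell the shader's samplers which unit each texture lives on.
 * Assumes: current GL context, linked program `prog`, texture objects
 * in tex[] -- all hypothetical names for illustration. */
for (int i = 0; i < num_layers; ++i) {          /* num_layers may exceed 8 */
    glActiveTextureARB(GL_TEXTURE0_ARB + i);    /* select image unit i     */
    glBindTexture(GL_TEXTURE_2D, tex[i]);       /* bind the i-th texture   */
}
for (int i = 0; i < num_layers; ++i) {
    char name[32];
    sprintf(name, "layer%d", i);                /* hypothetical uniform name */
    glUniform1iARB(glGetUniformLocationARB(prog, name), i); /* sampler -> unit i */
}
```

glClientActiveTextureARB would only come into play if you were feeding texcoord arrays through glTexCoordPointer, and that path is where the 8-set limit applies.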

Assume for the sake of argument that we change that code to use generic attributes only, and in fact pass only one set of texcoords (one attribute). But I don't see how to actually bind that ninth texture from the C side, given the error posted. Or is there a different call or enumerant we should use to attach each texture to the TMU?

more details:

We call both glActiveTextureARB( GL_TEXTURE0_ARB + tmu ) and glClientActiveTextureARB( GL_TEXTURE0_ARB + tmu ) in succession.

The former generates no error; the latter does.

You only need glActiveTextureARB(); the other is for vertex array setup.

glActiveTextureARB( GL_TEXTURE0 + 10 ) will work, but with the fixed-function pipeline I think you can only access 4 texture units; for more than that you need GLSL or Cg.

glClientActiveTextureARB( GL_TEXTURE0 + 10 ) will fail, since only 8 sets of texture coordinates are allowed.

From the OpenGL spec:

The constants obey TEXTUREi = TEXTURE0 + i (i is in the range 0 to k−1, where k is the implementation-dependent number of texture coordinate sets).

MAX_TEXTURE_COORDS is 8 for the GeForce 8800, so you will see an OpenGL error if you specify a value greater than GL_TEXTURE7.

The glClientActiveTexture function only needs to be used when you are using glTexCoordPointer to set up your texture coordinate vertex arrays.

Also, to sample from more than GL_MAX_TEXTURE_UNITS textures, you will need to use shaders (which can read from up to GL_MAX_TEXTURE_IMAGE_UNITS textures).

Read NVIDIA's FAQ.