Linking textures to vertex shader


I need to make two different lookup textures: one in the vertex shader and the other in the fragment shader. I’m using multitexturing to do it. So, I’m linking the textures to the shaders with the following code:

glUniform1iARB(_envMapLoc, 0);
glUniform1iARB(_textureLoc, 1);
glUniform1fARB(_textureVertLoc, 2);

I use the GL_TEXTURE1_ARB in the fragment shader and the GL_TEXTURE2_ARB in the vertex shader.
My first doubt: for the texture lookup to work in the vertex shader, I have to use glUniform1fARB instead of glUniform1iARB. Should it be like this?
Then, if I do a texture lookup only in the vertex shader or only in the fragment shader, it works fine. But if I try to use texture lookups in both the vertex and fragment shaders, it doesn’t work! I mean, it compiles and everything, but the mesh is not rendered.
For the texture in the vertex shader, I’m using the following:

gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB32F_ARB ,w,h, GL_RGB, GL_FLOAT, pixels);

Has anyone had this problem already, or can you give some hints on how to solve it?
I’m using an NVidia 6600GT.

Thank you.

First read this:

It says: "Currently only the GL_LUMINANCE_FLOAT32_ATI (edit: == GL_LUMINANCE_FLOAT32_ARB) and GL_RGBA_FLOAT32_ATI (edit: == GL_RGBA_FLOAT32_ARB) formats are supported for vertex textures. These formats contain a single channel or four channels of 32-bit floating point data, respectively. Be aware that using other texture formats or unsupported filtering modes may cause the driver to drop back to software vertex processing, with a commensurate drop in interactive performance."

The given code after that is missing the mipmap download or should use GL_NEAREST in the min filter.

I wouldn’t use gluBuild2DMipmaps because it runs in software, and GLU is stuck on an old implementation that probably doesn’t know anything about floating-point internal formats.
Try no mipmaps, build them yourself, or use extensions like
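Building the chain yourself is straightforward for float data. A minimal box-filter sketch (hypothetical helper, assuming tightly packed RGBA float pixels and even dimensions; not code from this thread):

```c
#include <stdlib.h>

/* Downsample one RGBA float mip level to the next (half width and
   height) with a 2x2 box filter. Hypothetical helper: assumes w and h
   are even; the caller frees the returned buffer and uploads it with
   glTexImage2D at the next level, repeating down to 1x1. */
float *downsampleRGBAF(const float *src, int w, int h)
{
    int dw = w / 2, dh = h / 2;
    float *dst = malloc((size_t)dw * dh * 4 * sizeof(float));
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x)
            for (int c = 0; c < 4; ++c) {
                /* average the 2x2 source block for this channel */
                float a = src[((2*y    )*w + 2*x    )*4 + c];
                float b = src[((2*y    )*w + 2*x + 1)*4 + c];
                float d = src[((2*y + 1)*w + 2*x    )*4 + c];
                float e = src[((2*y + 1)*w + 2*x + 1)*4 + c];
                dst[(y*dw + x)*4 + c] = 0.25f * (a + b + d + e);
            }
    return dst;
}
```

Each level would then be uploaded with glTexImage2D(GL_TEXTURE_2D, level, …), keeping the float data path entirely under your control instead of GLU’s.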

Texture samplers must be set with glUniform*i[v] variants. See OpenGL 2.0 spec chapter 2.15 page 81.

The problem is that I’m only able to access textures in vertex shader if I use glUniform1fARB!!! When I use glUniform1iARB, I only get a black texture. On the other hand, to access textures in the fragment shader I use glUniform1iARB. If I try to access different textures on both shaders, nothing is rendered.

Originally posted by Varela:
The problem is that I’m only able to access textures in vertex shader if I use glUniform1fARB!!! When I use glUniform1iARB, I only get a black texture. On the other hand, to access textures in the fragment shader I use glUniform1iARB. If I try to access different textures on both shaders, nothing is rendered.
That doesn’t mean your shader worked.
If you set the texture sampler with glUniform1f, the shader objects extension (or OpenGL 2.0) must throw an error. Check glGetError after the call.
Not having successfully set the sampler, probably means the sampler is pointing to texture unit 0.
If there is a texture there, this is used.
If you have multiple samplers and all point to 0 because of your error above, the shader will not work if the sampler type is not identical. That is if you have a 1D and a 2D lookup on the same unit, the whole shader will fail validation, I think.

Your vertex texture will not run in hardware if you don’t change your texture format from RGB_FLOAT32 to RGBA_FLOAT32.
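If the source data is three-component, one way around this (a hypothetical CPU-side helper, not from the original code) is to pad it to RGBA with alpha = 1.0 before uploading, so the hardware-supported four-channel float format can be used directly:

```c
#include <stdlib.h>

/* Expand tightly packed RGB float pixels to RGBA with alpha = 1.0,
   so the texture can be uploaded with a hardware-supported
   four-channel float internal format. Hypothetical helper; the
   caller frees the returned buffer. */
float *padRGBtoRGBA(const float *rgb, int w, int h)
{
    float *rgba = malloc((size_t)w * h * 4 * sizeof(float));
    for (int i = 0; i < w * h; ++i) {
        rgba[i*4 + 0] = rgb[i*3 + 0];
        rgba[i*4 + 1] = rgb[i*3 + 1];
        rgba[i*4 + 2] = rgb[i*3 + 2];
        rgba[i*4 + 3] = 1.0f;   /* opaque alpha */
    }
    return rgba;
}
```

The padded buffer would then go to glTexImage2D with format GL_RGBA and the RGBA float internal format, instead of relying on the driver to pad it.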

You can’t use mipmaps or linear filtering in the vertex texture.
You must explicitly define the LOD to use with the texture2DLod function if you downloaded mipmapped textures.

I guess gluBuild2DMipMaps with internalFormat of new float targets doesn’t work under Windows. The enum GL_RGB32F_ARB you used doesn’t seem to exist.

Use a different color for the glClearColor than black during debugging, to see if you rendered black on black.

I’m pretty sure it’s your fault, because there are working demos like “Simple Vertex Texture” here:

Fix all of the above first, then, if it still doesn’t work, provide all relevant code.

Thank you for your answer Relic.
I’ve made the changes you suggested, but the dark texture is still there! :frowning:
Here is the relevant code:

// Create texture to use in the fragment shader

GLuint CreateMipMapLinear(int w, int h, float *pixels)
{
    GLuint texture;
    glGenTextures(1, &texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, texture);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, w, h, GL_RGB, GL_FLOAT, pixels);
    return texture;
}

// Create texture to use in vertex shader

GLuint CreateTextureNearest(int w, int h, float *pixels)
{
    GLuint texture;
    glGenTextures(1, &texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, w, h, 0, GL_RGBA, GL_FLOAT, pixels);
    return texture;
}

// Linking textures to shaders

_envMapLoc = glGetUniformLocationARB(_program, "EnvMap");
_textureLoc = glGetUniformLocationARB(_program, "texture");
_textureVertLoc = glGetUniformLocationARB(_program, "textureVert");

glUniform1iARB(_envMapLoc, 0);
glUniform1iARB(_textureLoc, 1);
glUniform1iARB(_textureVertLoc, 2);

glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, envMap);

glBindTexture(GL_TEXTURE_2D, texture);

glBindTexture(GL_TEXTURE_2D, textureVert);


uniform sampler2D textureVert;

void main(void)
{
    gl_FrontColor = texture2DLod(textureVert, vec2(0.5, 0.5), 0.0);
    gl_Position = ftransform();
}


uniform samplerCube EnvMap;
uniform sampler2D texture;

void main(void)
{
    gl_FragColor = gl_Color;
}


When I create the vertex shader texture I have to use GL_RGBA32F_ARB because it doesn’t recognize RGBA_FLOAT32.

No, you haven’t.

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); // => invalid enum, mag cannot be mipmapped.

OK, my bad, I found GL_RGBA32F_ARB now, it’s the same as GL_RGBA_FLOAT32_ATI. I thought there is an ARB version of the latter.

glEnable(GL_TEXTURE_CUBE_MAP_ARB); // Unnecessary, this is used for texture units, shaders work on texture image units and these are enabled by the shader automatically.
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, envMap);
glDisable(GL_TEXTURE_CUBE_MAP_ARB); // Ouch, if that would be relevant…, but isn’t.

Where are the glGetError() calls?
You didn’t catch your errors.
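A small helper makes such checks cheap to sprinkle around. This is a sketch; the enum values are the standard ones normally supplied by <GL/gl.h>, repeated here only so the snippet stands alone:

```c
#include <stdio.h>

/* Standard OpenGL error enum values (normally from <GL/gl.h>). */
#define GL_NO_ERROR          0x0000
#define GL_INVALID_ENUM      0x0500
#define GL_INVALID_VALUE     0x0501
#define GL_INVALID_OPERATION 0x0502
#define GL_STACK_OVERFLOW    0x0503
#define GL_STACK_UNDERFLOW   0x0504
#define GL_OUT_OF_MEMORY     0x0505

/* Map a glGetError() result to a readable name. */
const char *glErrorName(unsigned int err)
{
    switch (err) {
    case GL_NO_ERROR:          return "GL_NO_ERROR";
    case GL_INVALID_ENUM:      return "GL_INVALID_ENUM";
    case GL_INVALID_VALUE:     return "GL_INVALID_VALUE";
    case GL_INVALID_OPERATION: return "GL_INVALID_OPERATION";
    case GL_STACK_OVERFLOW:    return "GL_STACK_OVERFLOW";
    case GL_STACK_UNDERFLOW:   return "GL_STACK_UNDERFLOW";
    case GL_OUT_OF_MEMORY:     return "GL_OUT_OF_MEMORY";
    default:                   return "unknown error";
    }
}
```

Usage after any suspect call would then be a one-liner, e.g. `printf("glUniform1iARB: %s\n", glErrorName(glGetError()));` – an invalid sampler assignment shows up immediately as GL_INVALID_OPERATION.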

So you have a texture lookup at the exact same position per vertex, put that into the color and use that color in the fragment shader.
The expected result is geometry at the position calculated by the current matrix setup due to ftransform() having a single color of the texture at coordinate (0.5, 0.5).

You just have to make sure your texture download succeeded and your floating point values are what you wanted.

Check if you got the correct locations for the texture samplers. You used only one.
EnvMap and texture should return -1 because they are not referenced.

Next please. :wink:

I’ve changed the order of the textures, i.e., GL_TEXTURE0_ARB is now the vertex shader texture, GL_TEXTURE1_ARB the fragment shader texture, and GL_TEXTURE2_ARB the environment cube texture. The program now does the lookup on the first two textures, but it is not recognizing the cube texture. So it seems it always links the first two textures but cannot read the third one, no matter which texture it is. I thought I could use more than two textures!! Is there a parameter I can query to confirm this?

Query glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxTextureImageUnits). That’s the number of texture units you can use in fragment programs. Should be 16 on your HW.
GL_MAX_TEXTURE_UNITS is the number of fixed-pipeline texture units (which react on glEnable/glDisable when fragment programs are off) (== 4 on your HW).
GL_MAX_TEXTURE_COORDS is the number of texture attribute arrays (= 8 on your HW).
GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB is the number of texture units available in the vertex shader (== 4 for you).

Show your cubemap download code and your new shader code and initialization.
Always add glGetError calls for debugging.

OK Relic. I solved the problem (at least I think). There was some confusion in my code when assigning textures to the shaders.
However there is one aspect that I would like to emphasize.
In my code I can use the following to create the vertex shader texture.
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB32F_ARB ,w,h, GL_RGB, GL_FLOAT, pixels);



It works OK for me. Moreover, it creates the mipmap structure to use in the vertex shader, which is necessary to use the texture2DLod instruction. Then, to choose the mipmap level, I used the bilinear filtering with mipmapping described in the NVidia whitepaper “Using Vertex Textures”.

Problem solved. Thank you for your help Relic.

Interesting. How fast is it?
Since the docs say only 1 and 4 float component textures are supported and you use RGB, yours should have fallen back to software.
contains a more detailed description of what’s supported. See table 4.16, last column.

gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB32F_ARB ,w,h, GL_RGB, GL_FLOAT, pixels);
This works because the driver probably pads the texture with alpha = 1.0

GL_RGB32F_ARB is just an internal format, which gluBuild2DMipmaps just passes on to glTexImage2D.
gluBuild2DMipmaps only cares about the input format.
If I recall correctly, since GL_FLOAT is used, glu’s algorithm for mipmapping is done with floats.

Of course, I’m assuming glu32.dll is the same as the open source SGI code.

Makes sense, since the supported textures document
says RGB32F becomes RGBA32F internally.
I need to try that out one day. :slight_smile:

I got the always-black-texture-lookup problem in my vertex shader. I am wondering if this problem could relate to the driver version (I use version 77.76). I’ve made a simple program with only one texture, which has to be used in my vertex shader. The texture is defined with

	glGenTextures(1, &textureID);
	glBindTexture(GL_TEXTURE_2D, textureID);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_FLOAT32_ATI, size, size, 0, GL_LUMINANCE, GL_FLOAT, texture);

To test the texture lookup values (which range from 0 to 1), I set the color as

gl_FrontColor = vec4(1,1,1,1) - texture2D(texture_sampler, tex_coord / 8.0);

where tex_coord is a vec2, with values ranging from 0 to 7 in both coordinates. The resulting color is white, which indicates that the lookup is black. I followed the guidelines given in this thread, as well as the guidelines given by NVidia for doing texture fetches in a vertex shader.

It should be mentioned that I got some trouble with my glGetError() calls. A glEnd() call always yields the error 0x0502 (= GL_INVALID_OPERATION)… none of the calls between my glBegin(GL_TRIANGLE_STRIP) and glEnd() report any errors, though.

So, in short, I guess my first question is which driver version to use… and of course, if anyone can spot a problem in the code given, I would be happy with a comment on that also.


No, you missed two important things.

First I said that the OpenGL example code in
“…is missing the mipmap download or should use GL_NEAREST in the min filter.”

You didn’t download mipmaps, which makes your texture inconsistent and therefore switches off the vertex texture unit.

Second, the document says “Level of detail is a measure of how magnified or minified the texture image is on the screen. It is normally calculated based on the rate of change of the texture coordinates from pixel to pixel. Since vertex textures are only accessed at the vertices there is no easy way for the hardware to calculate this value. If you want to use mipmapped textures you will have to calculate the LOD yourself in the vertex program…”

This means that for GLSL you must use texture2DLod in the vertex shader when using mipmapped vertex textures.

Your code should work if you set

You are right, the change from GL_NEAREST_MIPMAP_NEAREST to GL_NEAREST fixed the problem…

Thanks, you saved me a lot of time - I was somehow convinced that the problem was located elsewhere. Now I can get back to my original problem of implementing spherical harmonic lighting in my shader.


This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.