glGetUniformLocation returns -1 with sampler2D

Hey everyone,

I’m trying my hand at using textures in OpenGL. Currently, I’d like to get hold of the uniform handle of my sampler2D variable.
The code is:

    qDebug() << "glGetError before binding attribute_s_texture is" << glGetError();
    attribute_name = "texture1";
    attribute_s_texture = glGetUniformLocation(program, attribute_name);
    qDebug() << "glGetError after binding attribute_s_texture is" << glGetError();
    if (attribute_s_texture == -1) {
        fprintf(stderr, "Could not bind attribute %s\n", attribute_name);
    }

The glGetError() calls always return 0, yet the error message that the attribute couldn’t be bound always shows.

My fragment shader:

varying vec3 f_color;
uniform sampler2D texture1;

void main(void) {
//if(texture1 == 0)
//	gl_FragColor = vec4(f_color.x, f_color.y, f_color.z, 1.0);
	gl_FragColor = texture2D(texture1, gl_TexCoord[0].st);
}

(I’m trying to write the shader in a way that draws a texture if there is one and doesn’t when there isn’t.)

For the sake of completeness, my vertex shader:

attribute vec3 coord3d;
attribute vec3 v_color;
attribute vec3 v_normal;
attribute vec3 v_uv;

uniform mat4 mvp;

varying vec3 normals;
varying vec3 f_color;
//varying float intensity;

varying vec3 Id;

void main(void) {

gl_Position = mvp * vec4(coord3d, 1.0);
//vec3 Id = vec3(gl_LightSource[0].position - gl_Position);
Id = vec3(gl_LightSource[0].position - gl_Position);
//vec3 lightDir = normalize(Id);
//intensity = dot(lightDir, gl_Normal);
//normals = gl_NormalMatrix * gl_Normal/*v_normal*/;
//normals = gl_Normal;
normals = v_normal;
gl_TexCoord[0] = gl_MultiTexCoord0;
f_color = v_color;
}

(It’s … “experimental”.)

Anyone got an idea? Since I’m using the texture1 variable, I don’t think it’s being optimized out, is it?

Are you checking your info logs after you compile your shaders and link? That’s how I usually sort out bugs like this.

Also glGetUniformLocation returns -1 until the program has been successfully linked.

    // After each shader is compiled
    GLint logLength;
    GLint status;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 0) {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetShaderInfoLog(shader, logLength, &logLength, log);
        printf("Shader compile log: %s", log);
        free(log);
    }
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == 0) {
        return 0;
    }

    // After the program is linked
    assert(glIsShader(shader));

    GLint status;
    GLint logLength;
    glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 0) {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetProgramInfoLog(prog, logLength, &logLength, log);
        printf("Program link log: %s", log);
        free(log);
    }
    glGetProgramiv(prog, GL_LINK_STATUS, &status);
    if (status == 0) {
        printf("Link failed\n");
        return false;
    }
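For reference, a minimal sketch of the order that works for me (assumes a current GL context; `texture_id` is a placeholder for your texture object, and `program` matches the name used above):

```c
/* Link first — uniform locations are only valid after a successful link. */
glLinkProgram(program);

GLint linked = 0;
glGetProgramiv(program, GL_LINK_STATUS, &linked);   /* check this before querying */

GLint loc = glGetUniformLocation(program, "texture1");

glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_id);
glUniform1i(loc, 0);  /* the sampler uniform receives the texture *unit* (0),
                         not the texture object name */
```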
Yeah, I have glGetAttribLocation() calls for other variables (coord3d, …) right before the glGetUniformLocation() call, and those work just fine. The shaders compile and link successfully as well; I’m evaluating the log every time.

Not sure what the issue with the uniform location is, but note that a sampler2D variable is not an integer in disguise, even though the application uses glUniform1i to assign a value. Inside the shader the variable has an opaque type that cannot be compared to integers, cannot participate in arithmetic (adding two sampler2D variables does not work), and cannot do pretty much anything else except be passed to one of the texture access functions.
Even if you use a separate uniform variable to indicate whether a texture is available, the situation is slightly iffy, because (at least certain generations of) hardware cannot really execute conditionals. Instead it always executes both branches and discards the result of the branch not taken, so there is always an attempt to read from the texture even if your variable says you don’t have one. That can lead to trouble if no texture is bound. Aggressive optimization may be able to figure out that the uniform is false and kill the complete branch, but I believe that is not required by GLSL, and relying on optimizations for correctness is a bit of a slippery slope :wink:
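For what it’s worth, the flag approach usually looks something like this (`has_texture` is a hypothetical uniform you would set from the application with glUniform1i; as noted above, older hardware may still sample texture1 even when the flag is false, so keep a valid texture bound anyway):

```glsl
varying vec3 f_color;
uniform sampler2D texture1;
uniform bool has_texture;   // hypothetical flag, set from the application

void main(void) {
    if (has_texture)
        gl_FragColor = texture2D(texture1, gl_TexCoord[0].st);
    else
        gl_FragColor = vec4(f_color, 1.0);  // fall back to the vertex color
}
```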

Well, just to let anyone who’s interested know - After renaming the variable and rewriting my shader (maybe I wasn’t really using it after all and the use was optimized out anyway) I have no more problems. Don’t really know what that was about, thanks a lot anyway to y’all. :slight_smile: