glGetUniformLocation returns -1? Texture in shader

Okay so… my apologies for the multiple threads lately… I’m not much of a graphics programmer at all and I’ve been thrust into the position of being one lately after someone quit and I’ve been handed several years worth of buggy, undocumented, uncommented code, and been told to figure it out and make it work. So… believe me when I say any SCRAPS of wisdom you guys can offer me around here is EXTREMELY appreciated. :slight_smile:

I’m trying to build a small standalone project to test a few ideas out and having a little difficulty… I’m getting a lot of -1s for glGetUniformLocation calls that as far as I know should work fine…

Here’s my EXTREMELY simplified fragment shader code:

uniform samplerRect Something;
uniform vec3 testtest;
uniform vec4 blah;

void main(void) {

	gl_FragColor = gl_Color;
}


And the relevant parts of the CPU side:

Initializing the shaders…
void Render::InitializeShaders() {

	// Create the vertex and fragment shader
	m_ShaderProgram = glCreateProgram();

	m_VertexShader = glCreateShader( GL_VERTEX_SHADER );
	m_FragmentShader = glCreateShader( GL_FRAGMENT_SHADER );

    // strings to hold the vertex and fragment shader source
	string vertShaderText;
	string fragShaderText;

    // read the vertex and fragment shader files in
	ReadShaderFile(vertShaderText, "vertexshader.vert");
    ReadShaderFile(fragShaderText, "fragshader.frag");

    // set up some const char pointers to pass to ShaderSource
    const char* vsSource = vertShaderText.c_str();
    const char* fsSource = fragShaderText.c_str();

    // specify the shader source for the shaders
    glShaderSource( m_VertexShader, 1, &vsSource, NULL );
    glShaderSource( m_FragmentShader, 1, &fsSource, NULL );

    // compile the shaders
    glCompileShader( m_VertexShader );
    glCompileShader( m_FragmentShader );

    // check for Errors
    checkCompileErrors( m_VertexShader, "Vertex" );
    checkCompileErrors( m_FragmentShader, "Fragment" );

    // attach the shaders
    glAttachShader( m_ShaderProgram, m_VertexShader );
    glAttachShader( m_ShaderProgram, m_FragmentShader );

    // link & use the shaders
    glLinkProgram( m_ShaderProgram );
    glUseProgram( m_ShaderProgram );
}


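Worth noting while we're in this function: checkCompileErrors covers the compile stage, but a failed glLinkProgram also makes every later glGetUniformLocation call return -1, and nothing here checks GL_LINK_STATUS. A minimal link check that could slot in right after the glLinkProgram call; this is a sketch against the member names above, and it assumes the same live GL context and headers as the rest of the file:

```cpp
    // After glLinkProgram( m_ShaderProgram ):
    GLint linked = GL_FALSE;
    glGetProgramiv( m_ShaderProgram, GL_LINK_STATUS, &linked );
    if ( linked != GL_TRUE ) {
        // pull the info log so the linker tells us what went wrong
        GLint logLength = 0;
        glGetProgramiv( m_ShaderProgram, GL_INFO_LOG_LENGTH, &logLength );
        std::vector<char> log( logLength > 1 ? logLength : 1, '\0' );
        glGetProgramInfoLog( m_ShaderProgram, (GLsizei)log.size(), NULL, log.data() );
        fprintf( stderr, "Shader program link failed: %s\n", log.data() );
    }
```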
The Render constructor…
Render::Render() {


    GLint i = glGetUniformLocation(m_ShaderProgram, "Something");
    GLint j = glGetUniformLocation(m_ShaderProgram, "testtest");
    GLint z = glGetUniformLocation(m_ShaderProgram, "blah");
}

When I run this, I get:
i = 0, j = -1, z = -1.
Any suggestions as to why?

Second question - I’m EXTREMELY confused about interfacing textures and the shader… I’ve been tasked with trying to implement using a texture as a lookup table in the shader code. I’ve looked at a lot of shader tutorials, but I’m still a little confused as to how I’d go about this.

The guy I inherited this project from was trying something similar. The best I could gather was that he was using a samplerRect to reference inside it to get values… I’m just really confused about how you pass a texture into the shader or whatnot though. I understand how you set vector / float / etc values, but textures confuse me a bit and I’m not sure how to make the shader know what texture to read from.

The main thing is I don’t want to actually APPLY the texture to any object. Just… generate a row/column table and turn it into a texture to be passed into the shader for quick lookup. I’m… a bit confused as to how to go about it though. :stuck_out_tongue: As I said above… any even small SCRAPS of help are AWESOME. :slight_smile:

Basically, you need to bind the texture to the correct texture unit and load that unit’s index (e.g. 0) into the sampler* variables. The OpenGL wiki has an example; search for “glsl sampler” and you get some nice infos. It’s not that hard.

I’ve been tasked with trying to implement using a texture as a lookup table in the shader code. I’ve looked at a lot of shader tutorials, but I’m still a little confused as to how I’d go about this.

How big of a table?

If it is only 2-4 entries, then you can just use 2-4 samplers and use a bunch of if/else statements to select which one to use.

However, if you need more textures, then the best way to do this is with an array texture. The third coordinate is which 2D image in the array to pull from.

I’m not sure how to make the shader know what texture to read from.

Yeah, this is one of the annoying parts of the GL API. One I really wish they’d find a way to correct.

OK, so you have uniforms in your shader. These uniforms have a type. Some of these uniforms have a Sampler type. These uniforms are samplers, which can be used to access a texture. These are in effect texture binding points.

However, you can’t attach a texture object to a program. You need to use the GL context as an intermediate.

Instead of setting which texture a particular sampler uniform uses, you set which texture image unit a sampler uniform uses. Texture image units are indices that range from 0 to 31.

Once you have set this, and bound the program to the context, you now must bind the texture(s) that you want to use with that program to those indices.

So, let’s say you have a program that has these 3 textures:

uniform sampler2D baseTexture;
uniform sampler1D fresnelFactor;
uniform sampler2D normalMap;

You get the uniform location for each of these 3 samplers with glGetUniformLocation. Then, bind the program to the context, and set into each uniform the index you want them to look for their texture in:

glUniform1i(glGetUniformLocation(prog, "baseTexture"), 0);
glUniform1i(glGetUniformLocation(prog, "fresnelFactor"), 2);
glUniform1i(glGetUniformLocation(prog, "normalMap"), 3);

This means that “baseTexture” will pull from whatever is bound to texture image unit 0, “fresnelFactor” will pull from whatever is bound to texture image unit 2, and “normalMap” will pull from whatever is bound to texture image unit 3.

Now, all you need to do is bind those textures to those units, selecting each unit with glActiveTexture before binding:

glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, baseTextureName);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_1D, fresnelLookupTableName);
glActiveTexture(GL_TEXTURE0 + 3);
glBindTexture(GL_TEXTURE_2D, normalMapName);

And you should be set for rendering, as far as textures are concerned.

I’m… starting to follow. I can’t get past glActiveTexture though… null pointer every time… I’ve glEnabled GL_TEXTURE_2D… textures are a major blind spot for me and I’m a bit confused as to what I’m missing / not enabling?

In the simple shader you posted, the two “missing” uniforms aren’t actually used. The shader compiler is free to eliminate them as an optimization. See the documentation for glGetActiveUniform().

Just in case, check whether the glActiveTexture function pointer has actually been loaded:

    if ( glActiveTexture == NULL )
        msgbox("it's null!");

It’s not… I looked around and it has something to do with the OpenGL version…? All this GL version, extension, glew, and utilities stuff gets really gets confusing… :stuck_out_tongue:

What can I say? Keep on pushing: if not on windows, then on linux.

Just download GLEW, include its header, then call glewInit at some point in your render thread, after the GL context has been created. Don’t worry about the “checking for extensions” bit, just assume you’ve got basic things like glActiveTexture. The worst that will happen is an unhandled exception followed by program termination…which is what you’d do anyway, seeing as the card must be 12 years old.

glGetUniformLocation() == -1 implies that the uniform you’re asking about isn’t even referenced by the resulting shader, so there’s no point in you messing with it later (e.g. giving it a value, etc.). So you don’t get a valid location for it – because it’s been compiled right out of your shader!

Now you might say: “hey, wait! I see it right there in my shader source code!”. Yes, you do, but think about it from the perspective of the compiler that compiles your shader down to a set of instructions to execute. Are there any instructions that actually “do” anything with “testtest” or “blah”? Nope. So these uniforms are not used. In GLSL parlance, these are not “active” uniforms". They’re useless. Active uniforms are the ones that are actually referenced by the code. None of your above uniforms are active, because none are referenced by the shader code.

Bah. I had called glewInit but it was a bit after that place… All of these extensions and all of this are a mess. I can’t keep them straight. glu glew glaux glext… confusion level = high. O_o

I changed it up so that I’m assigning vectors using them and I’m still getting -1…?

    GLint i = glGetUniformLocation(m_ShaderProgram, "Something");
    GLint j = glGetUniformLocation(m_ShaderProgram, "testtest");
    GLint z = glGetUniformLocation(m_ShaderProgram, "blah");

Fragment Shader

uniform samplerRect Something;
uniform vec3 testtest;
uniform vec4 blah;

void main(void) {
	vec2 textureCoordinates = vec2( 0, 0 );
	vec4 test = textureRect( Something, textureCoordinates );
	float test2 = test.r;
	vec4 wtf = blah;
	vec3 test3 = testtest;
	gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
}

I get 0 for Something and -1 for the others…

All of these extensions and all of this are a mess. I can’t keep them straight. glu glew glaux glext…

There’s nothing to be confused about.

GLEW is an extension loading library. You include GLEW’s header, call its init function, and you have all of OpenGL. GLU, GLAUX and GLEXT have nothing to do with this.

GLAUX isn’t something anyone should be touching. GLU isn’t something someone writing shader code should use. And “GLEXT” isn’t even a thing; it’s just a header file that declares all of the OpenGL extension entrypoints and defines, which GLEW handles for you.

I changed it up so that I’m assigning vectors using them and I’m still getting -1…?

GLSL compilers are many things. Stupid is not one of them. Except maybe ATI’s compiler. And even that isn’t that stupid.

Any halfway decent compiler of any language can plainly see that the only line of code that matters is “gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);” Therefore, any halfway decent GLSL compiler will discard the unneeded uniforms.

In short, your code needs to understand that if a uniform location is -1, then you shouldn’t try to set it. Your code needs to be aware of this possibility and deal with it.

mmmm… I see… I didn’t realize that’s what was meant when someone said it discarded objects that weren’t “active.” I presumed if it got referenced it would be considered “active.” I think I’m starting to see now though.

Sigh… well Alfonse… I thought I understood what you were saying with the binding with the textures… and I’m successfully getting the shader to work / get the uniforms and everything as far as I can tell… but I can’t get values from my texture at all, everything is coming back as 0 when I try to pull from the sampler in the shader… as a test I’ve created a really simple RGBA / float texture and loaded every value up with 1 and still get 0… do you mind looking at this and telling me what the really obvious and probably stupid mistake I’m making is?

(My apologies for my newbishness again, like I’ve said, I’ve never really messed with GL/GLSL much before and a huge buggy half completed undocumented project has fallen into my lap and all of this is overwhelming at times… -_-:wink:

Creating the texture…

TextureHandler::TextureHandler() {

    m_Target = GL_TEXTURE_2D;
    m_InternalFormat = GL_RGBA;
    m_Format = GL_RGBA;
    m_Type = GL_FLOAT;
}


MaterialTextureHandler::MaterialTextureHandler() {


		m_Width = 64;
		m_Height = 64;
}

void MaterialTextureHandler::InitializeMaterials() {

	// if hardvavlues (fixed values), just set everything
	// to a value of 1 so we can check it in the shader...
	for ( int i = 0; i < m_Width; i++ ) { // for each ID
		for ( int j = 0; j < m_Height; j++ ) {
			m_Data[i][j][0] = 1.0; //( i * 10 ) + j;
			m_Data[i][j][1] = 1.0;
			m_Data[i][j][2] = 1.0;
			m_Data[i][j][3] = 1.0;
		}
	}
}


void MaterialTextureHandler::GenAndBindTexture() {

	glGenTextures( 1, &m_id );			// generate a texture
    glActiveTexture(GL_TEXTURE0 + 0);
	glBindTexture( m_Target, m_id );	// bind it to the target type

	glTexImage2D( m_Target, 0, m_InternalFormat,
			m_Width, m_Height, 0,
			m_Format, m_Type, m_Data );
}


void Render::Initialize() {

    //  enable GL stuff as necessary..
    glEnable( GL_TEXTURE_2D );

    // set the projection mode
    glMatrixMode( GL_PROJECTION );
    gluPerspective( 60, 1, .1, 10000 );

    // modelview
    glMatrixMode( GL_MODELVIEW );

    // Create the material stuff....
}


Linking the shader to the texture…

    GLint m_MaterialInformationTextureLocation = glGetUniformLocation(m_ShaderProgram, "MaterialInformationTexture");
    glUniform1i(m_MaterialInformationTextureLocation, 0);

This gives m_MaterialInformationTextureLocation = 0

Then the shader code…

uniform samplerRect MaterialInformationTexture;

void main(void) {
	vec2 textureCoordinates = vec2( 0, 0 );
	vec4 test = textureRect( MaterialInformationTexture, textureCoordinates );
	float test2 = test.r;
	if(test2 > 0.0)
		gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );
	else
		gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
}

I know that’s probably a lot to look at… but I’m really frustrated because this always just turns up nothing but red and I really don’t know what’s going on… =/ I’ve tried all sorts of different coords and nothing is giving…

EDIT: I’ve also tried changing samplerRect to sampler2D, and textureRect to texture2D… this compiles and runs as well but I’m still getting every position = 0…

A few things:

you should edit your post and change the samplerRect/textureRect to sampler2D/texture2D. I know you’ve mentioned that at the end, but you should change it in the post so we know the current state of your code at a glance.

your sampling in the shader should give you texel(1.0)/255.0 = 0.00392, as a consequence of the way OpenGL normalises numbers based on the texture’s internal format. I get the feeling you’re expecting it to give you the 1.0 you put in it.

also, make sure your glUniform1i call is made while the shader program is bound to the context (i.e. after the glUseProgram() or whatever call you’re using to enable the shader). Otherwise the uniform’s value won’t be set and the sampler will not work in the shader.

oh, like columbo says, just one more thing…
seeing as though your texture setup code is using texture_2d, you should put these lines after you create it (after the glTexImage2D call)…

    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

OpenGL defaults the filters for texture_2d’s to mipmap filters, but you’re not specifying a mipmap stack, so the texture is invalid for its filter state. Changing the filters to non-mipmap filters should do the trick.

Okay, definitly on the right track, I think I’m just having trouble understanding how to write / access the texture, because it’s not working out like I expected. Well, sort of.

I changed it up so I was writing different values for R and zeros for GBA:

	for ( int i = 0; i < m_Width; i++ ) {
		for ( int j = 0; j < m_Height; j++ ) {
			m_Data[i][j][0] = ( i * 10 ) + j;
			m_Data[i][j][1] = 0.0;
			m_Data[i][j][2] = 0.0;
			m_Data[i][j][3] = 0.0;
		}
	}


…and then trying to sample different locations in the texture to get those values… it wasn’t working out so well, so I changed it to just looking for > 0.0 at different locations…

uniform sampler2D MaterialInformationTexture;

void main(void) {
	vec2 textureCoordinates = vec2( 0.0, 0.0 ); 
	vec4 test = texture2D( MaterialInformationTexture, textureCoordinates);
	float test2 = test.r;
	if(test2 > 0.0)
		gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );
	else if (test2 == 0.0)
		gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
	else
		gl_FragColor = vec4( 1.0, 1.0, 1.0, 1.0 );
}

And I was just getting red everywhere… so I tried a few other coords, found out if I went halfway (.5) or some other partial value (.3)… I got green…

	vec2 textureCoordinates = vec2( 2.0, 10.0 );  - everything red
	vec2 textureCoordinates = vec2( 2.5, 10.0 );  - everything green
	vec2 textureCoordinates = vec2( 1.5, 10.0 );  - green
	vec2 textureCoordinates = vec2( 1.0, 10.0 );  - red
	vec2 textureCoordinates = vec2( 1.3, 10.0 );  - green

I was a little confused about what you said about the normalizing and expecting to get the 1 I put in there, so I looked around on the wiki and I didn’t think GL_RGBA was normalized… but I tried changing the internalFormat from GL_RGBA to GL_RGBA32F which according to the wiki isn’t normalized… got the same results.

	// After changing m_InternalFormat to GL_RGBA32F:
	vec2 textureCoordinates = vec2( 0.0, 0.0 );  - red
	vec2 textureCoordinates = vec2( 1.5, 1.0 );  - green
	vec2 textureCoordinates = vec2( 1.3, 1.0 );  - green
	vec2 textureCoordinates = vec2( 1.0, 10.0 ); - red

Ugh. So confused. :frowning: I seriously appreciate all of the help you guys… I’d never have even made it this far without you guys’ assistance. :slight_smile:

(By the way… what do those two TexParam lines do exactly… because I’ve looked up the explanation for both MIN_FILTER and MAG_FILTER and I still don’t completely understand… and I really am trying to do more than just copy code from the forums and paste it into my IDE, I promise :slight_smile: … it’s just that OpenGL / GLSL is a totally foreign world to me right now and I’ve just been tossed into it. lol)

I looked around on the wiki and I didn’t think GL_RGBA was normalized…

You must not have looked very hard. I wrote the page on Image Formats, and it is very clear on this point: GL_RGBA8 is a 4-component, 8 bits-per-component, unsigned normalized integer format.

As for your texturing problem, it’s simple: your texture coordinates are not what you think they are.

Texture coordinates are normalized. This allows the texturing system to not be dependent on the particular resolution of the texture being accessed. The texture coordinate (0.5, 0.5) is the center of a texture, no matter what its size.

Odds are that your texture wrapping mode is set to be repeat. Therefore, your texture coordinate values outside of the [0, 1) range will have their non-decimal values chopped off. A texture coordinate of (4.3, 208.4) when you’re using repeat texture wrapping is no different from (0.3, 0.4).

If you actually want to use size-based texture coordinates, you have to either pass the texture’s size to the shader, or use GLSL 1.30’s texelFetch command.

it does you great credit that you want to understand the code.
OpenGL texture objects have certain state attached to them, one of which is the minify/magnify filters to use. Imagine drawing a quad with a texture mapped onto it - the magnify filter is what the sampler should do when the quad is drawn bigger than the size of the texture mapped onto it, while the minify filter is what the sampler should do when the quad is drawn smaller than the size of the texture mapped onto it.
The minify filter can use mipmaps, where it samples from pre-shrunk versions of the texture to better approximate what it looks like shrunk down. Look up mipmaps on wikipedia or something. Anyway, by default an OpenGL texture object has the minify filter set to mipmap…but in order for this to work your texture object needs to have a complete set of mipmaps specified. You specify mipmaps by repeatedly calling glTexImage2D with smaller and smaller versions of your texture, and changing the ‘level’ parameter for each version. There’s a few ways of automating this - one is using the outdated gluBuild2DMipmaps function, another is to use the glGenerateMipmap function (you have to use something like glew to get this function, and check it’s supported on your card).
These filter states have been removed in direct3d 10, and instead the filter method is chosen in the shader itself, using sampler configuration. But don’t concern yourself with that.

OHHHHH. Okay, that does make sense… aaaaaand with it switched over to RGBA32F it looks like I’ve got it working right now! :slight_smile:

Yeah I definitely want to understand. I mean… I’ve just been dropped into this project at work and I’ve never really done graphics work before, not even so much as a graphics course. I’ve gotten through most of the red book and hopefully will understand it better the second time through, but I’ve had to start on this stuff before I was even through the first few chapters because the project needed to move and they’ve made it clear they’re not hiring a graphics specialist. I’m basically the only coder on our team at the moment… so they see it like “you work with curly braces and semicolons… you can just pick all this up and run with it right?”… and thus far OpenGL blows me away -_-; I know it seems like I’m not “looking very hard” sometimes but half the time I’m not even sure what I’m looking FOR. =/