Texture in GL3.2 only shows up black - help please

I’m attempting a basic textured quad using GL3.2 only.

The quad is drawn - but it only shows up black - no texture is visible. Help please!

The texture is a TGA file loaded using the GLFW function. The textureID is created and bound prior to loading.

This is run after the shaders are compiled and linked:

  glUniform1i(glGetUniformLocation(shaderProgram[SHADER_NORMAL].programID, "ColorMapSampler"), 0);

Shaders -

#version 150
precision highp float;

uniform sampler2D   ColorMapSampler;

in  vec2    outTexCoord;
in  vec4    outColor;

out vec4    FragColor;

void main(void)
{
    FragColor = texture(ColorMapSampler, outTexCoord.st);
}


#version 150

uniform mat4 projMat;
uniform mat4 modelMat;

in  vec2    inTexture0;
in  vec2    inPosition;

out vec2    outTexCoord;

in  vec3    inColor;
out vec4    outColor;

void main(void)
{
    vec4 pos = modelMat * vec4(inPosition, 0.0, 1.0);

    gl_Position = projMat * pos;

    outTexCoord = inTexture0;
}

Init code for the quad


bool gl_setupVBOFirstUse(int whichTexture, float posX, float posY, float texWidth, float texHeight)
//-----------------------------------------------------------------------------
{

  myQuad[0].x = posX;
  myQuad[0].y = posY;
  myQuad[0].s = 0.0f;
  myQuad[0].t = 1.0f;

  myQuad[1].x = posX + texWidth;
  myQuad[1].y = posY;
  myQuad[1].s = 0.0f;
  myQuad[1].t = 0.0f;

  myQuad[2].x = posX + texWidth;
  myQuad[2].y = posY + texHeight;
  myQuad[2].s = 1.0f;
  myQuad[2].t = 1.0f;

  myQuad[3].x = posX;
  myQuad[3].y = posY + texHeight;
  myQuad[3].s = 1.0f;
  myQuad[3].t = 0.0f;

  GLuint indexBufferObject;

  glGenVertexArrays(1, &myvao[0]);
  glGenBuffers(2, &myvbo[0]);

  glBindVertexArray(myvao[0]);	// first vao is created
  glBindBuffer(GL_ARRAY_BUFFER, myvbo[0]);	// position buffer is created
  glBufferData(GL_ARRAY_BUFFER, sizeof(myVertex) * 4, myQuad, GL_STATIC_DRAW);

  const GLuint positionIdx = glGetAttribLocation(shaderProgram[SHADER_NORMAL].programID, "inPosition");
//
// this binds position buffer to VAOs
  glVertexAttribPointer(positionIdx, 2, GL_FLOAT, GL_FALSE, sizeof(myVertex), BUFFER_OFFSET(0));
  glEnableVertexAttribArray(positionIdx);

  const GLuint textureIdx =  glGetAttribLocation(shaderProgram[SHADER_NORMAL].programID, "inTexture0");
//
// this binds texture positions to VAO
  glVertexAttribPointer(textureIdx, 2, GL_FLOAT, GL_FALSE, sizeof(myVertex), BUFFER_OFFSET(sizeof(GLfloat) * 2));
  glEnableVertexAttribArray(textureIdx);


//  const GLuint colorIdx = glGetAttribLocation(shaderProgram[SHADER_NORMAL].programID, "inColor");

//  glBindBuffer(GL_ARRAY_BUFFER, myvbo[1]);
  float colorData0[] = {1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f};
//  glBufferData(GL_ARRAY_BUFFER, sizeof(colorData0), colorData0, GL_STATIC_DRAW);


// this binds color buffer to VAO
  //glVertexAttribPointer(colorIdx, 3, GL_FLOAT, GL_FALSE, 0, 0);
//  glEnableVertexAttribArray(colorIdx);


  projMatrixLocation = glGetUniformLocation(shaderProgram[SHADER_NORMAL].programID, "projMat");
  modelMatrixLocation = glGetUniformLocation(shaderProgram[SHADER_NORMAL].programID, "modelMat");

  return GL_TRUE;
}

And the drawing function


void updateScreen(float interpolation)
//-----------------------------------------------------------------------------
{
  kmGLMatrixMode(KM_GL_PROJECTION);
  kmGLLoadIdentity();
  kmMat4OrthographicProjection(&projectionMatrix, 0.0f, (float)winWidth, (float)winHeight, 0.0f, 1.0f, 100.0f);

  kmGLMatrixMode(KM_GL_MODELVIEW);
  kmGLLoadIdentity();

  kmGLTranslatef(0.0, 0.0f, -1.0f);

  kmGLGetMatrix(KM_GL_MODELVIEW, &modelview);

    //
    // End matrix generation
    //

  glViewport(0, 0, winWidth, winHeight);
  glClearColor(0.0, 0.0, 0.3, 1.0);
  glClear(GL_COLOR_BUFFER_BIT);

  glUseProgram(shaderProgram[SHADER_NORMAL].programID);

   //
   // Load the matrices into the vertex shader
  glUniformMatrix4fv(modelMatrixLocation, 1, GL_FALSE, modelview.mat);
  glUniformMatrix4fv(projMatrixLocation, 1, GL_FALSE, projectionMatrix.mat);

  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D, textures[0].texID);

  // Set to texture unit bound - not texture ID
  glUniform1i(glGetUniformLocation(shaderProgram[SHADER_NORMAL].programID, "ColorMapSampler"), 0);

  glBindVertexArray(myvao[0]);
  glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

// glDrawElements(GL_TRIANGLE_STRIP, sizeof(myIndexes) / sizeof(myIndexes[0]), GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

  glfwSwapBuffers();        // Swap buffers

  gl_getAllGLErrors("updateScreen");

    glUseProgram(0);
    glBindTexture(GL_TEXTURE_2D, 0);

  // Check if the ESC key was pressed or the window was closed
  quitProgram = glfwGetKey( GLFW_KEY_ESC ) && glfwGetWindowParam( GLFW_OPENED );
}

I do get this error from rendering the screen - but only once - it seems to be related to the way the GLFW window is set up.


10:16:00 > OpenGL Error: Called from [ updateScreen ]
10:16:00 > glError [ GL_INVALID_OPERATION: The specified operation is not allowed in the current state. ]

I don’t see anything suspicious in your code…

  1. You need to know exactly what command caused GL_INVALID_OPERATION.

  2. Post your shader initialization code for completeness.

  3. Try to output something simpler: a constant color, then the texture coordinate.

After some more digging, and more debug code - something very strange is happening:

This code assigns the vertex position values to the in vec2 inPosition variable within the shader - this is working, as the quad is displayed on the screen. However, the error check after the glBindBuffer call generates a GL_INVALID_OPERATION.

Which is apparently generated if glBindBuffer is executed between glBegin and the corresponding glEnd.

I’m not calling glBegin anywhere.


  glBindBuffer(GL_ARRAY_BUFFER, myvbo[0]);	// position buffer is created
  gl_getAllGLErrors("After position glBindBuffer");

  GLint positionIdx = glGetAttribLocation(shaderProgram[SHADER_NORMAL].programID, "inPosition");
  gl_getAllGLErrors("After position glGetAttribLocation");
  if (-1 == positionIdx)
  {
    io_logToFile("Invalid attrib - not found [ %s ]", "inPosition");
    return GL_FALSE;
  }

  glBufferData(GL_ARRAY_BUFFER, sizeof(myVertex) * 4, myQuad, GL_STATIC_DRAW);
  gl_getAllGLErrors("After position glBufferData");

  glVertexAttribPointer(positionIdx, 2, GL_FLOAT, GL_FALSE, sizeof(myVertex), BUFFER_OFFSET(0));
  gl_getAllGLErrors("After position glVertexAttribPointer");

  glEnableVertexAttribArray(positionIdx);
  gl_getAllGLErrors("After position glEnableVertexAttribArray");

And this is the weird bit - the same code fails to find the in vec2 inTexture variable declared in the same shader??


  glBindBuffer(GL_ARRAY_BUFFER, myvbo[1]);
  gl_getAllGLErrors("After texture glBindBuffer");

  GLint textureIdx = glGetAttribLocation(shaderProgram[SHADER_NORMAL].programID, "inTexture");
  gl_getAllGLErrors("After texture glGetAttribLocation");

  if (-1 == textureIdx)
  {
    io_logToFile("Invalid attrib - not found [ %s ]", "inTexture");
    return GL_FALSE;
  }
  glVertexAttribPointer(textureIdx, 2, GL_FLOAT, GL_FALSE, sizeof(myVertex), BUFFER_OFFSET(sizeof(GLfloat) * 2));
  gl_getAllGLErrors("After texture glVertexAttribPointer");

  glEnableVertexAttribArray(textureIdx);
  gl_getAllGLErrors("After texture glEnableVertexAttribArray");

But then after the inTexture call - it does find the next in vec3 inColor variable:


  glBindBuffer(GL_ARRAY_BUFFER, myvbo[2]);
  GLint colorIdx = glGetAttribLocation(shaderProgram[SHADER_NORMAL].programID, "inColor");
  if (-1 == colorIdx)
  {
    io_logToFile("Invalid attrib - not found [ %s ]", "inColor");
    return GL_FALSE;
  }

  glBufferData(GL_ARRAY_BUFFER, sizeof(colorData0), colorData0, GL_STATIC_DRAW);
  glVertexAttribPointer(colorIdx, 3, GL_FLOAT, GL_FALSE, sizeof(GL_FLOAT), BUFFER_OFFSET(0));
  glEnableVertexAttribArray(colorIdx);

Here's the vert shader - as you can see, the variables either side of inTexture are found, but not inTexture.


#version 150
precision highp float;

uniform mat4 projMat;
uniform mat4 modelMat;

in  vec2 inPosition;
in  vec2 inTexture;
in  vec3 inColor;

out vec2    outTexCoord;
out vec4    outColor;

void main(void)
{
    vec4 pos = modelMat * vec4(inPosition, 0.0, 1.0);

    gl_Position = projMat * pos;

    outTexCoord = inTexture;
    outColor = vec4(inColor, 1.0);

}

I can now see a nicely coloured quad on the screen if I ignore the failure to find ‘inTexture’.

  1. It doesn’t matter why inTexture is not bound correctly as long as you still have GL_INVALID_OPERATION on BindBuffer.

  2. I asked you to post shader initialization code. AFAIK, the values you are trying to obtain by glGetAttribLocation are the values you had to assign before linking the shader (so, basically, it’s weird to ask GL about them).

  3. In the first code sample you called BindBuffer only once. In the last post you called it several times. When exactly does the error happen?

AFAIK, the values you are trying to obtain by glGetAttribLocation are the values you had to assign before linking the shader

That’s not true. GLSL will automatically assign attribute locations for you, if you don’t assign them.

Here’s the new code with the GL_INVALID_OPERATION fixed.

From looking at other tutorials, they only appear to bind the data buffer once, then assign the position and texture attributes from that one bound data set - it's interleaved PosX, PosY, TexS, TexT for each vertex.


  glBindVertexArray(myvao[0]);	// first vao is created

  glBindBuffer(GL_ARRAY_BUFFER, myvbo[0]);	// position buffer is created
  gl_getAllGLErrors("After position glBindBuffer");

  GLint positionIdx = glGetAttribLocation(shaderProgram[SHADER_NORMAL].programID, "inPosition");
  gl_getAllGLErrors("After position glGetAttribLocation");
  if (-1 == positionIdx)
  {
    con_print(GL_TRUE, "Invalid attrib - not found [ %s ]", "inPosition");
    return GL_FALSE;
  }

  glBufferData(GL_ARRAY_BUFFER, sizeof(myVertex) * 4, myQuad, GL_STATIC_DRAW);
  gl_getAllGLErrors("After position glBufferData");

  glVertexAttribPointer(positionIdx, 2, GL_FLOAT, GL_FALSE, sizeof(myVertex), BUFFER_OFFSET(0));
  gl_getAllGLErrors("After position glVertexAttribPointer");

  glEnableVertexAttribArray(positionIdx);
  gl_getAllGLErrors("After position glEnableVertexAttribArray");

//
// Texture coordinates

  GLint textureIdx = glGetAttribLocation(shaderProgram[SHADER_NORMAL].programID, "inTexture");
  gl_getAllGLErrors("After texture glGetAttribLocation");

  if (-1 == textureIdx)
  {
    con_print(GL_TRUE, "Invalid attrib - not found [ %s ]", "inTexture");
//    return GL_FALSE;
  }

  glVertexAttribPointer(textureIdx, 2, GL_FLOAT, GL_FALSE, sizeof(myVertex), BUFFER_OFFSET(sizeof(GLfloat) * 2));
  gl_getAllGLErrors("After texture glVertexAttribPointer");

  glEnableVertexAttribArray(textureIdx);
  gl_getAllGLErrors("After texture glEnableVertexAttribArray");

From my understanding of the man pages, glGetAttribLocation is called after the shader program has been compiled and linked:

glGetAttribLocation queries the previously linked program object specified by program for the attribute variable specified by name and returns the index of the generic vertex attribute that is bound to that attribute variable

Here’s the shader init code:


//-----------------------------------------------------------------------------
//
// Load and compile shaders
bool gl_loadCompileShaders(int programObject)
//-----------------------------------------------------------------------------
{
    GLint   compiled;
    GLint   linked;
    GLint   *vertFile = NULL;
    GLint   vertFileLength;

    GLint  *fragFile = NULL;
    GLint   fragFileLength;

    GLuint       vertexShaderObject, fragmentShaderObject;

    vertFileLength = io_GetFileSize((char *)shaderProgram[programObject].vertFileName);
    if (-1 == vertFileLength)
        return false;

    vertFile = (GLint *)malloc(sizeof(char) * vertFileLength);
    if (NULL == vertFile)
        return false;

    if (-1 == io_GetTextFile((char *)shaderProgram[programObject].vertFileName, (char *)vertFile))
        return false;

    fragFileLength = io_GetFileSize((char *)shaderProgram[programObject].fragFileName);
    if (-1 == fragFileLength)
        return false;

    fragFile = (GLint *)malloc(sizeof(char) * fragFileLength);
    if (NULL == fragFile)
        return false;

    if (-1 == io_GetTextFile((char *)shaderProgram[programObject].fragFileName, (char *)fragFile))
        return false;

    vertexShaderObject = glCreateShader(GL_VERTEX_SHADER);
    fragmentShaderObject = glCreateShader(GL_FRAGMENT_SHADER);

    if ((0 == vertexShaderObject) || (0 == fragmentShaderObject))
        {
        gl_getAllGLErrors((char *)"gl_loadCompileShaders");
        return false;
        }

    // Do a quick switch so we can do a double pointer below
    const char *szVShader = (char *)vertFile;
    // Attach the shader to the shaderObject
    glShaderSource(vertexShaderObject, 1, (const GLchar **)&szVShader, &vertFileLength);

    // Do a quick switch so we can do a double pointer below
    const char *szFShader = (char *)fragFile;
    // Attach the shader to the shaderObject
    glShaderSource(fragmentShaderObject, 1, (const GLchar **)&szFShader, &fragFileLength);

    // now compile the shader source
    glCompileShader(vertexShaderObject);
    //
    // Check it compiled ok
    glGetShaderiv(vertexShaderObject, GL_COMPILE_STATUS, &compiled);
    if (GL_TRUE == compiled)
        printf("Vertex shader compiled ok.");
    else
    {
        printf("Vertex shader compile failed [ %s ]", shaderProgram[programObject].vertFileName);
        gl_getGLSLError(vertexShaderObject, GLSL_SHADER);
        return false;
    }

    glCompileShader(fragmentShaderObject);
    glGetShaderiv(fragmentShaderObject, GL_COMPILE_STATUS, &compiled);
    if (GL_TRUE == compiled)
        printf("Fragment shader compiled ok.");
    else
    {
        printf("Fragment shader compile failed [ %s ]", shaderProgram[programObject].fragFileName);
        gl_getGLSLError(fragmentShaderObject, GLSL_SHADER);
        return false;
    }

    shaderProgram[programObject].programID = glCreateProgram();

    glAttachShader(shaderProgram[programObject].programID, vertexShaderObject);
    glAttachShader(shaderProgram[programObject].programID, fragmentShaderObject);

    glLinkProgram(shaderProgram[programObject].programID);

    glGetProgramiv(shaderProgram[programObject].programID, GL_LINK_STATUS, &linked);
    if (GL_TRUE == linked)
    {
        printf("Shaders linked ok.\n");
        shaderProgram[programObject].linkedOK = true;
    }
    else
    {
        printf("Shaders failed to link.\n");
        gl_getGLSLError(vertexShaderObject, GLSL_PROGRAM);
        return false;
    }
    return true;
}

This is the fragment shader. If I comment out the FragColor = texture2D line and enable the color line, it displays a colored quad, but it complains about not finding ‘inTexture’.

Running it as it is results in a black quad being drawn.


16:02:14 > Invalid attrib - not found [ inTexture ]
16:02:14 > OpenGL Error: Called from [ After texture glVertexAttribPointer ]
16:02:14 > glError [ GL_INVALID_VALUE: A numeric argument is out of range. ]
16:02:14 > OpenGL Error: Called from [ After texture glEnableVertexAttribArray ]
16:02:14 > glError [ GL_INVALID_VALUE: A numeric argument is out of range. ]
16:02:21 > Logfile: Log file closed:


#version 150
// It was expressed that some drivers required this next line to function properly
precision highp float;

uniform sampler2D   Texture0;

in  vec2    outTexCoord;

out vec4    FragColor;

void main(void)
{
    FragColor = texture2D(Texture0, outTexCoord);
//  FragColor = vec4(0.4, 0.5, 1.0, 0.5);
}

Thanks for taking the time to answer.

Cheers

I think I may have found why my texture is all black.

Calling this: glEnable(GL_TEXTURE_2D);

I get this error:

glError [ GL_INVALID_ENUM: An unacceptable value is specified for an enumerated argument. ]

But from what I can work out, that is still allowed in GL3.2 - GL_TEXTURE_2D is still defined within gl3.h.

The call:
glEnable(GL_TEXTURE_2D);

is only used in fixed-function code (i.e. legacy OpenGL).

  1. I asked you to post shader initialization code. AFAIK, the values you are trying to obtain by glGetAttribLocation are the values you had to assign before linking the shader (so, basically, it’s weird to ask GL about them).

From my understanding of the man pages - glGetAttribLocation is called after the shader program has been compiled and linked

[quote]
glGetAttribLocation queries the previously linked program object specified by program for the attribute variable specified by name and returns the index of the generic vertex attribute that is bound to that attribute variable

[/QUOTE]

I was talking about glBindAttribLocation. It should be clear if you read carefully.
So, as your glGetAttribLocation can’t find the attrib by name, I suggest trying to bind the attrib location manually (like I do), using glBindAttribLocation before program linking.
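The suggestion above - fixing the locations yourself before linking - would look roughly like this. A non-runnable sketch only: the attribute indices 0/1/2 are arbitrary choices, and the calls must happen after glAttachShader but before glLinkProgram:

```c
/* Sketch: fix attribute locations up front instead of querying them
   afterwards. Indices are arbitrary but must be < GL_MAX_VERTEX_ATTRIBS. */
glBindAttribLocation(programID, 0, "inPosition");
glBindAttribLocation(programID, 1, "inTexture");
glBindAttribLocation(programID, 2, "inColor");

glLinkProgram(programID);   /* the bindings only take effect at link time */

/* After a successful link, glGetAttribLocation should return 0, 1 and 2,
   unless the compiler optimized an unused attribute away (-1). */
```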

Your code seems to be correct to me…

Solved !!!

Right


     // Set texture parameters
      glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
      glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
      glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
      glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );

      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, textures[i].width, textures[i].height, 0, GL_RGB, GL_UNSIGNED_BYTE, textures[i].imageData);

:mad: Wrong


      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, textures[i].width, textures[i].height, 0, GL_RGB, GL_UNSIGNED_BYTE, textures[i].imageData);

     // Set texture parameters
      glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
      glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
      glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
      glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );



Cool!

But I didn’t think there was a difference in code execution.
AFAIK, glTexParameteri sets parameters of the currently bound texture. These params don’t depend on the texture image, so glTexImage2D should be callable at any time (before or after).

I’ll head to the spec to understand this…

Wasn’t this a bug in ATi drivers from long ago?
I guess it’s got to be somewhere in the specs, as ATi try to stick to them; and these 3.2 drivers are probably nVidia’s (which have also become quite strict).

Indeed they are - nVidia Beta Linux drivers - 190.32

Yes, IMHO it is a driver bug; glTexParameter and glTexImage2D set two different object states. The first one affects the texture object state and the second one the texture image state, IIRC. So, as you said, calling glTexImage2D before or after should be correct (and I have already done it successfully).

nevermind