Omnidirectional Shadow Cube Texture Sample Breaking

Hello all,

I was working with omnidirectional shadow mapping using cube maps in order to get point-light shadows. Unfortunately they are not working; they're actually breaking my entire scene. I create and bind the cube map successfully, draw a depth map to it, then try to sample it in my shaders. The cube map does draw successfully (I checked with apitrace; the depth maps render correctly).

I bind it to the renderer and it seems to be bound to the shader fine, but when I call textureCube it returns a weird float value that is completely incapable of producing true, false, or anything sensible. I think it's an error, but I don't know why.

Even if the samplerCube isn’t set correctly though, textureCube would then return 0.0 I would think. So why does it return an error?

P.S. I tried to check NaN and Inf, it is neither.

Weird in what way? Outside of the 0…1 range?

What is that supposed to mean?

Are you using a samplerCube and performing the comparison yourself, or using samplerCubeShadow? The latter is not valid to textureCube (which in later GLSL versions is only available in the compatibility profile; the core profile uses overloads of texture() rather than different functions for different texture types).

Sorry, I was a bit frustrated posting the original. I'm using samplerCube and it's not returning any sensible float value at all from textureCube. It's as if it's returning an error (which would usually happen at compile time), which I thought was impossible.
I'm purposely using GLSL 120 on an Intel chip with the Ubuntu drivers.
It's supposed to return 0…1 from a depth cube map bound to the shader, and if I do anything with the value returned from textureCube it blacks out every fragment. That's weird, because the lighting is additive; if the value were negative it would make everything slightly darker rather than completely black.
I was just wondering if it was a common problem, and asking for what I might be missing in my fragment shader.

GLSL 1.2 corresponds to OpenGL 2.1, which doesn’t support depth-component cube maps (§3.8.1):

Textures with a base internal format of DEPTH_COMPONENT are supported by texture image specification commands only if target is TEXTURE_1D, TEXTURE_2D, PROXY_TEXTURE_1D or PROXY_TEXTURE_2D. Using this format in conjunction with any other target will result in an INVALID_OPERATION error.

If you’re using a later version of OpenGL than 2.1, then I’d suggest changing the shaders to use the version of GLSL which corresponds to the OpenGL version.

So in short, I should NOT be using depth-component cube maps, but I can draw the depth into the red channel of a colour attachment to get the same effect? But checking apitrace I can clearly see that the cube map is being drawn; is it just not possible to sample it?

According to the OpenGL 2.1 specification, the glTexImage2D() call should fail if you try to create a depth-format cube map. If the call succeeds, then the implementation would appear to offer some functionality over and above OpenGL 2.1, but it’s anyone’s guess as to what will work and what won’t. Have you checked whether it supports a GLSL version newer than 1.2?

The problems with simply using the red channel are

  1. You can’t use it as a depth buffer for rendering; you’d need to render each face using a depth-format 2D texture as the depth buffer then copy the pixel data into a face of a colour-format cube map.
  2. You’re limited to 16 bits (OpenGL 2.1 doesn’t have any colour formats with more than 16 bits per component).
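(As an aside: if you do fall back to a colour-format cube map and the 16-bit limit bites, a common workaround is to split the depth across two 8-bit channels. A self-contained sketch of that packing trick; the function names are illustrative, not from any API.)

```cpp
#include <cmath>

// Sketch: pack a depth value in [0, 1] into two 8-bit channels (e.g. R and G
// of an RGBA8 texture) and unpack it again, giving roughly 16 bits of
// precision. Names are made up for illustration.
void packDepthRG(float depth, unsigned char& r, unsigned char& g) {
    float scaled = depth * 255.0f;
    float hi = std::floor(scaled);                  // coarse 8 bits
    float lo = std::floor((scaled - hi) * 255.0f);  // fine 8 bits
    r = (unsigned char)hi;
    g = (unsigned char)lo;
}

float unpackDepthRG(unsigned char r, unsigned char g) {
    return (r + g / 255.0f) / 255.0f;
}
```

The same arithmetic works in a GLSL 1.20 fragment shader using floor() and the gl_FragColor channels.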

Which version of opengl allows depth cube map textures? I updated all my shaders to glsl 330 and it didn’t seem to do anything to fix this particular problem.


Then maybe the issue lies elsewhere.

Are you calling glGetError() to check that no errors have occurred? If you’re having problems, this should be called at least after initialisation and at the end of the drawing code.

Have you tried reading the depth data (with glReadPixels or glGetTexImage) to check that rendering is actually working?
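If the raw error codes aren't meaningful at a glance, a small helper can translate them. Purely a sketch; the hex values are the standard ones from gl.h, duplicated so it compiles without the GL headers:

```cpp
#include <string>

// Translate a glGetError() code into a readable name.
std::string glErrorName(unsigned int err) {
    switch (err) {
        case 0:      return "GL_NO_ERROR";
        case 0x0500: return "GL_INVALID_ENUM";
        case 0x0501: return "GL_INVALID_VALUE";
        case 0x0502: return "GL_INVALID_OPERATION";
        case 0x0505: return "GL_OUT_OF_MEMORY";
        case 0x0506: return "GL_INVALID_FRAMEBUFFER_OPERATION";
        default:     return "unknown error";
    }
}
```

Calling glGetError() in a loop until it returns GL_NO_ERROR matters, since several errors can be queued.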

I ran glCheckError() in a loop after everything related to drawing, and found that the error is GL_INVALID_OPERATION, given after glDrawElements.

It works when the shadowCube basically returns 0, throwing no error, but when I enable it the shadow cube goes back to breaking the fragment.

I ran glReadPixels on the centre of the depth cube map, and it returns 0.75, which would be 1.0 if nothing had been drawn there. So I'm pretty sure the depth cube is valid. For some reason sampling it just breaks the float it returns into.

I feel it's highly unlikely that this is just a driver error. I'm not using a geometry shader for the depth mapping, but the second reason listed for glDrawElements raising GL_INVALID_OPERATION honestly confuses me a lot: GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to an enabled array or the element array and the buffer object's data store is currently mapped.

From what I can glean from that horrendous definition, it has to do with VAOs, but my VAOs work fine as long as depth cubes don’t get used in the shader at all.

P.S. I’m also assuming that if the samplerCube has no texture attached to it, that it will just return black always…?

That will occur if the program itself is invalid (did you check glGetProgram(GL_LINK_STATUS)?), or if some aspect of the state used by the program is invalid. In the specification, the details aren’t listed in the sections related to the drawing functions (glDrawElements() etc) but those related to the functions which manipulate the state.

It refers to the case where a draw call tries to read attribute data or indices from a buffer which is currently mapped (glMapBuffer() etc).

P.S. I’m also assuming that if the samplerCube has no texture attached to it, that it will just return black always…?
A sampler whose associated texture is incomplete (§8.17 in the 4.5 specification) returns 0 for a shadow sampler and (0,0,0,1) for anything else.

So if the texture I uploaded was invalid, it should return 0, which in my case would not make the final fragment colour completely black; it would just cancel out the diffuse lighting I have set up.
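To spell out the arithmetic I mean (illustrative numbers, not my actual shader values):

```cpp
// Additive lighting model: a sampler that returns 0 cancels the diffuse
// term but leaves the ambient/base contribution intact, so the fragment
// should dim, not go completely black.
float fragmentBrightness(float ambient, float diffuse, float shadow) {
    return ambient + diffuse * shadow;
}
```

With shadow = 0.0 the result is just the ambient term; only a bogus sample (e.g. a huge negative value) could drive the sum to black.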

The shaders seem to compile and link just fine, at least at build time; I didn't think I needed to check anything after that. And I'm not doing anything that would use glMapBuffer().

I read somewhere that if the texture() call is inside conditionals, the compiler might hoist it out of the if statements and completely mess up which sampler it uses. But even then it would return black. Is there a way to check what the sampler is returning without the colour-output trick? I know apitrace doesn't have any GLSL-watching functions.

Other than that is there any other possibility that a cube map texture might mess up the whole shader?

So you check GL_LINK_STATUS? glGetError() won’t report an error for compilation or linking failures.

Errors with the state used by the program can only be reported when the program is actually used for drawing.

Lots of things can mess up the operation of the shader as a whole. Many of them can’t reasonably be detected on the CPU side, and there’s no error-reporting mechanism available to GPU code.

I do check the shaders after they have been compiled and they seem to compile and link fine. And the shaders do work entirely as long as the float value I calculate in my shadow calculation function is never actually returned.

The difference is

float shadowCalcPoint(vec4 fragLightSpace, int samplerInd, float bias){
    vec3 projShadowCoords = fragLightSpace.xyz;
    float thisPointDepth = length(projShadowCoords);
    float closestDepth = length(projShadowCoords);

    if(samplerInd == 0){
        closestDepth = texture(shadowCube0, projShadowCoords).r;
    }
    if(samplerInd == 1){
        closestDepth = texture(shadowCube1, projShadowCoords).r;
    }
    if(samplerInd == 2){
        closestDepth = texture(shadowCube2, projShadowCoords).r;
    }
    if(samplerInd == 3){
        closestDepth = texture(shadowCube3, projShadowCoords).r;
    }
    //If this returns 0.0 the shader works fine. But if I remove it, it uses the r value returned from the texture, which should be valid but isn't.
    //return 0.0;
    return (closestDepth > (thisPointDepth - bias)) ? 1.0 : 0.0;
}

You can also see that very little has to change in the shader to get it to run, so it's most likely this part of it. shadowCube0-3 are all samplerCubes with depth textures bound, which should also be valid.

I know I could be using an array, but I just ported this from GLSL 120 so I'm leaving it that way. Regardless, you can see that closestDepth determines the final outcome of the function, yet when it's used in the comparison the function returns neither 1.0 nor 0.0, even though either one would be perfectly valid. The comparison is binary, so it doesn't make a heck of a lot of sense why it's not working. :dejection:

If you don’t use the result of the texture() call, the call itself will be omitted. In turn, that will typically cause other code to be removed. In other words, if you uncomment the “return 0.0;” statement, that entire function becomes just:

float shadowCalcPoint(vec4 fragLightSpace, int samplerInd, float bias) {
    return 0.0;
}

One version calls texture() for a samplerCube, the other doesn’t (even if the calls are in the source code, they’re removed by the compiler). If that texture() call results in an error, the behaviour of the shader is entirely undefined.

Check that the texture is complete (all 6 faces are defined), that it isn’t using a filtering mode which involves mipmaps, that none of the faces are bound to a FBO. Try replacing the cube map with one using a colour format rather than depth.

Unfortunately, it’s infeasible to list all of the factors which could be relevant. Try to find the closest case which does work. If the program is large, try to reduce it to a minimal test case which still demonstrates the problem (e.g. rendering a single triangle using a cube map which was initialised with test data rather than being rendered into).

Due to the problems we’re having with debugging this, I’m going to show my code so that if I am missing anything you can let me know.

The cube map depth texture is initialized like this:

void ShadowCube::Init(){
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapBuffer.getID());

    for(int i = 0; i < 6; i++)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0,
            GL_DEPTH_COMPONENT32, shadowMapW, shadowMapH, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

    //glTexParameteri calls that disable mipmapping [snipped from the post]

    glBindFramebuffer(GL_FRAMEBUFFER, frameBuff.getID());
    for(int i = 0; i < 6; i++){
        //attach each face with glFramebufferTexture2D and call checkFBO [snipped from the post]
    }

    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}


checkFBO outputs FBO errors. That set of glTexParameter calls purposely disables mipmapping. It then works through the cube map faces one by one, making sure each can be attached to the FBO.

So after all that’s done, it’s updated like this:

struct CameraDirection {
    GLenum CubemapFace;
    glm::vec3 Target;
    glm::vec3 Up;
};

static const CameraDirection CameraDirections[6] = {
    { GL_TEXTURE_CUBE_MAP_POSITIVE_X, glm::vec3(1.0f, 0.0f, 0.0f), glm::vec3(0.0f, -1.0f, 0.0f) },
    { GL_TEXTURE_CUBE_MAP_NEGATIVE_X, glm::vec3(-1.0f, 0.0f, 0.0f), glm::vec3(0.0f, -1.0f, 0.0f) },
    { GL_TEXTURE_CUBE_MAP_POSITIVE_Y, glm::vec3(0.0f, 1.0f, 0.0f), glm::vec3(0.0f, 0.0f, -1.0f) },
    { GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, glm::vec3(0.0f, -1.0f, 0.0f), glm::vec3(0.0f, 0.0f, 1.0f) },
    { GL_TEXTURE_CUBE_MAP_POSITIVE_Z, glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, -1.0f, 0.0f) },
    { GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, glm::vec3(0.0f, 0.0f, -1.0f), glm::vec3(0.0f, -1.0f, 0.0f) }
};
void ShadowCube::Update(glm::mat4){
    //Grab current view before adjusting
    GLint viewPort[4];
    glGetIntegerv(GL_VIEWPORT, viewPort);
    glViewport(0, 0, shadowMapW, shadowMapH);
    glClearColor(FLT_MAX, FLT_MAX, FLT_MAX, FLT_MAX);

    for(int i = 0; i < 6; i++){
        glBindFramebuffer(GL_FRAMEBUFFER, frameBuff.getID());
        glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapBuffer.getID());

        viewProjectionMatrix = glm::perspective<float>(glm::radians(90.0f), (float)shadowMapW / shadowMapH, 1.0f, 500.0f) * glm::lookAt(myCenterPos, myCenterPos + CameraDirections[i].Target, CameraDirections[i].Up);

        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, cubeMapBuffer.getID(), 0);

        GLuint vpLoc = glGetUniformLocation(depthProgID.getID(), "depthVP");
        glUniformMatrix4fv(vpLoc, 1, GL_FALSE, glm::value_ptr(viewProjectionMatrix));

        vpLoc = glGetUniformLocation(skelDepthProgID.getID(), "VP");
        glUniformMatrix4fv(vpLoc, 1, GL_FALSE, glm::value_ptr(viewProjectionMatrix));

        //clear and draw the scene with the depth-only shaders [snipped from the post]
    }

    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(viewPort[0], viewPort[1], viewPort[2], viewPort[3]);
}
The FBO is bound along with the cube map texture. Then the texture is bound face by face to the FBO. The camera is set up from the view and projection matrices, and it draws all of the objects in the scene using a simplified shader meant only to fill a depth texture, so no colours. It seems to work fine; according to glReadPixels it does draw the depth geometry. I've tried calling glFramebufferTexture2D to unbind the faces afterwards, but it still won't sample from the texture.

Then after all of the shadow maps have been drawn, I bind them to the main render shader using this snippet:

void PointLight::Draw(ShaderObj* myProgID, int inShaderID){
    pointShaderLightIDs* pIDs = pointIDs[myProgID];
    glUniform3f(pIDs[inShaderID].colorID, color.x, color.y, color.z);
    glm::vec3 myPosition = myTransform.getPosition();
    glUniform3f(pIDs[inShaderID].posID, myPosition.x, myPosition.y, myPosition.z);
    glUniform3f(pIDs[inShaderID].attenID, attenuation.x, attenuation.y, attenuation.z);
    glUniform1i(pIDs[inShaderID].exID, 1);

    glUniform1i(pIDs[inShaderID].shadSampID, inShaderID);
    glUniform1i(pIDs[inShaderID].shadTexture, 10 + inShaderID);

    dynamic_cast<ShadowCube*>(masterShadowMapList[inShaderID + MAX_LIGHTS])->myCenterPos = myTransform.getPosition();
    dynamic_cast<ShadowCube*>(masterShadowMapList[inShaderID + MAX_LIGHTS])->Update(glm::mat4(1.0));
    glActiveTexture(GL_TEXTURE0 + 10 + inShaderID);
    glBindTexture(GL_TEXTURE_CUBE_MAP,
        dynamic_cast<ShadowCube*>(masterShadowMapList[inShaderID + MAX_LIGHTS])->cubeMapBuffer.getID());
}

shadSampID determines which of the shadowCubes the fragment shader reads from. shadTexture tells it that the shadowCube will be bound to texture unit 10 + whatever ID (0-3) it has in the point light list. The centre position is passed so the fragment shader has a direction to sample with. The shadow cube is updated there, as shown above, and then the actual cube map texture is bound to the texture unit that was given to the sampler earlier.

I hope at least some of this helps… I'm not sure where I went wrong; it's not exactly a complicated set of concepts: you bind each of the cube map's faces to the framebuffer and draw them in turn. I also made sure to check that the framebuffer was complete while drawing; it seems to be working on that front. Then the cube map texture is bound to the shader and sampled with the direction from the light to the fragment, and it's meant to return 0.0-1.0.

I’m going to try to run a working depth cube program in the meantime, but if you could take a quick peek through my code I’d appreciate it.

I can’t see anything in the posted fragments which looks problematic, particularly if rendering into the cube map faces works.

GL_DEPTH_COMPONENT32 isn’t a required format, but if it’s supported at all, it should work both as a framebuffer attachment and as a source texture. If it isn’t supported, it wouldn’t work for either.

I managed to get my engine compiling on Windows and enabled the shadow cubes again. I watched the behaviour of the light, which turned out very grainy for some weird reason. I ran it through gDEBugger and got some glActiveTexture errors.

Do you have any recommendations on what to do next to find the problem?

Find the closest thing which does work. E.g. what happens if you use six 2D textures instead of a cube map?

I found what was probably the major problem. The shaders were indeed invalid, and it took a stroke of luck to find the function that tells you why: they were invalid only because the unused samplers were being left at texture unit 0.

For future generations: you cannot have two different kinds of samplers (e.g. a sampler2D and a samplerCube) assigned to the same texture unit (GL_TEXTURE0); doing so gives you undefined behaviour. This error can be detected with glValidateProgram(), and the message can then be retrieved with a checker function like this:

void checkShader(){
        int InfoLogLength;
        GLint valid;
        glGetProgramiv(getID(), GL_VALIDATE_STATUS, &valid);
        glGetProgramiv(getID(), GL_INFO_LOG_LENGTH, &InfoLogLength);
        std::vector<char> ProgramError(std::max(InfoLogLength, 1));
        glGetProgramInfoLog(getID(), InfoLogLength, NULL, &ProgramError[0]);
        std::cout << "The program is " << valid << " in validity\n";
        fprintf(stdout, "%s\n", &ProgramError[0]);
}

My next problem is dealing with the grainy shadows it produces, which will probably be a major problem as well, but at least I know it will be my own code's fault. :smiley:

One more question, what would be the method of determining the sample in 2D coordinates from a 3D vector, if you had 6 2D textures on an atlas?

If I have further problems I cannot solve I will post again. Until then, thank you very much GClements for all of your advice.


That sounds like an issue with the state being used by shaders rather than with the shaders themselves. Issues with the shader code will show up in the compilation and/or linking status and in the logs. Issues with the state won’t (they can’t be detected prior to the draw call, and in some cases simply can’t be detected on the client side).

Indeed. glValidateProgram() will tell you if the current state is invalid for a given program. It should be able to detect any situation where a draw call with that program will generate an error.

What do you mean by “grainy”? Common issues with shadow maps include visible pixellation (due to the limited spatial resolution of the texture) and depth-fighting artefacts (due to the limited depth resolution and non-linear mapping between Z and depth). The former can be reduced by using cascaded shadow maps, the latter by increasing the near distance or using linear depth.

The process is described in §8.13 of the 4.5 specification. Briefly: first determine which of the three coordinates (X, Y or Z) has the greatest magnitude, and the sign of that coordinate; this determines which of the six faces is used. Then the other two coordinates are divided by the magnitude of the major coordinate, which gives values in the range -1 to +1. These are mapped to the 0…1 range and used as texture coordinates on the given face. The choice of which coordinate is S and which is T, and the signs, is given in table 8.19:

Face  S  T
+X   -Z -Y
-X   +Z -Y
+Y   +X +Z
-Y   +X -Z
+Z   +X -Y
-Z   -X -Y
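That procedure, written out as plain code (a sketch following table 8.19; the atlas formula at the end is my own convention, not anything from the spec):

```cpp
#include <cmath>

// Map a cube-map direction (x, y, z) to a face index and (s, t) coordinates,
// following table 8.19. Faces 0..5 correspond to +X, -X, +Y, -Y, +Z, -Z
// (the GL_TEXTURE_CUBE_MAP_* order).
struct CubeCoord { int face; float s; float t; };

CubeCoord cubeMapCoord(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    float ma, sc, tc;
    int face;
    if (ax >= ay && ax >= az) {            // X has the greatest magnitude
        ma = ax; face = x >= 0 ? 0 : 1;
        sc = x >= 0 ? -z : z;  tc = -y;
    } else if (ay >= az) {                 // Y has the greatest magnitude
        ma = ay; face = y >= 0 ? 2 : 3;
        sc = x;  tc = y >= 0 ? z : -z;
    } else {                               // Z has the greatest magnitude
        ma = az; face = z >= 0 ? 4 : 5;
        sc = z >= 0 ? x : -x;  tc = -y;
    }
    // Divide by the major axis, then remap from [-1, 1] to [0, 1].
    return { face, 0.5f * (sc / ma + 1.0f), 0.5f * (tc / ma + 1.0f) };
}
```

For six faces laid out on a 3×2 atlas you could then offset into the sheet with something like u = (face % 3 + s) / 3 and v = (face / 3 + t) / 2.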