Shadow mapping works on Nvidia but not on Intel?

I have always worked on the engine at home on an Nvidia card, and then I brought it here to compile on an Intel card (and, to my surprise, it worked).

Everything works fine; even the bloom renders into its FBO without problems. However, the shadow mapping still does not work.

This thread is more or less the reverse of the following thread:

But even after going through the thread above, I still could not solve my problem here.
The FBO status reports no errors. I dumped the depth texture to an image and it is completely white. I decreased the farVal of the glOrtho call and nothing changed, so it does not seem to be a depth-precision issue.
Still puzzled; any thoughts are highly appreciated.

How can we help without seeing any code?

Search the specifications for “depth_component”, “shadow”, and “undefined”, then make sure you’re not doing any of those things.

Also remember to reset the texture's depth compare parameter when rendering the depth texture for debugging purposes (i.e., when sampling it as a plain “sampler2D”).
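The snippet for this appears to have been lost from the post; a minimal sketch of the idea, assuming a current GL context and using `shadow_tex` as an illustrative texture id:

```c
/* Fragment, not a full program: assumes a current GL context and a bound
 * depth texture whose id, shadow_tex, is an illustrative name. */
glBindTexture(GL_TEXTURE_2D, shadow_tex);

/* Turn depth comparison off so the texture can be sampled as a plain
 * sampler2D and its raw depth values visualized. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);

/* ... draw the debug quad here ... */

/* Restore comparison for normal shadow lookups (sampler2DShadow). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
```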


Randall, sorry for not providing the code in the first post. Here it is:

        // Drawing for Shadow Mapping

        GLfloat ratio = ( GLfloat )window_width / ( GLfloat )window_height;
        GLfloat light_look_at[] = {0.0f, 0.0f , 0.0f };

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, shader->shadow_fbo);

        glViewport(0,0, shader->shadow_map_width, shader->shadow_map_height);

        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glClear(GL_DEPTH_BUFFER_BIT);

        glMatrixMode( GL_PROJECTION );
        glLoadIdentity( );
        glOrtho( 0.0f, shader->shadow_map_width, 0.0f, shader->shadow_map_height, 300, 3000);

        glMatrixMode( GL_MODELVIEW );
        glLoadIdentity( );
        gluLookAt( env->light_position[0] + translatex, env->light_position[1], env->light_position[2] + translatez, light_look_at[0] + translatex, light_look_at[1], light_look_at[2] + translatez, 0.0, up, 0.0);



        //------ Draw Scene

        glViewport(0,0, window_width, window_height);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);




        glMatrixMode( GL_PROJECTION );
        glLoadIdentity( );
        gluPerspective( 45.0f, ratio, 0.1f, 10000.0f );



And the setTextureMatrix() is:

void setTextureMatrix( )
{
        static double modelView[16];
        static double projection[16];

        const GLdouble bias[16] = {
                0.5, 0.0, 0.0, 0.0,
                0.0, 0.5, 0.0, 0.0,
                0.0, 0.0, 0.5, 0.0,
                0.5, 0.5, 0.5, 1.0};

        glGetDoublev(GL_MODELVIEW_MATRIX, modelView);
        glGetDoublev(GL_PROJECTION_MATRIX, projection);

        // Build the texture matrix: bias * projection * modelView
        glMatrixMode(GL_TEXTURE);
        glLoadMatrixd(bias);
        glMultMatrixd(projection);
        glMultMatrixd(modelView);

        glMatrixMode(GL_MODELVIEW);
}


Remdul, thank you. I added the line and also tried disabling the debugging, but it had no effect.

Thanks arekkusu, I am taking a look at the specification now.

No luck; I still couldn't solve the problem.

Actually, things have gotten worse. My shadow-mapping implementation follows the tutorial from as well as a mixture of other explanations and tutorials.

I ran the tutorial's code here without any problems, and I tried changing the tutorial's variables to match my code to see whether I could introduce the error. Still, the tutorial's code works perfectly.

The mistake seems to be in the part that draws into the FBO, since the shadow texture never gets drawn at all (the image is entirely white). But I can't see any mistake there :confused:.

Moreover, I am using an Intel card under Linux, which means my driver is implemented by Mesa. Maybe the problem is related to that. I will update the library and try it out.

In the end it was a driver problem. Mesa 8.0 (which, ironically, was released one day before my first post) solved the problem nicely and even adds support for OpenGL 3.0 and GLSL 1.30. Now it is easy to see why Carmack wrote his own drivers.

The hurdle was that the shadow mapping worked for a simple example and not for my engine. Trying to figure out why burned my brain.

Thank you very much for your insights.