Color and shadow mapping, is it possible?

Is it possible to use shadow mapping with per-vertex colors rather than with a texture?
I have a cube whose vertices have colors as attributes; I don’t use textures for this cube. Is it possible to apply shadow mapping to it?
Has anyone seen a tutorial on how to do this somewhere?
Because truth be told, everything I have seen was about textures, and I don’t even have any idea how to modify the shader.

I don’t really understand what you mean by “use textures”. Shadow mapping is a way of rendering a scene from a particular viewpoint to a texture and then using the depth data stored in that texture to tell whether a given location in space is in shadow or not. By definition, shadow mapping involves at least one texture: the shadow map (which can be built from multiple textures).

Now, perhaps by “use textures”, you are talking about a texture which provides color information to a surface. Well, as previously stated, shadow mapping is a process for determining if a point is in shadow from a light source. This determines whether or not you add that light’s contribution to the final color for that fragment. It does not care about how you get the (for example) diffuse color of a surface for that fragment; it is solely about how you decide which lights contribute to that particular fragment.

So by that definition of “use texture”, such a texture is orthogonal to the shadow mapping process.
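To make the point concrete: the shadow test only decides how much of the light’s contribution to keep, while the base color can come from a vertex attribute instead of a texture. A minimal fragment-shader sketch (the names `v_color`, `shadow_coord` and `shadow_map` are illustrative, not taken from the poster’s code):

```glsl
#version 130
in vec4 v_color;                    // per-vertex color; no color texture involved
in vec4 shadow_coord;               // fragment position in light space
uniform sampler2DShadow shadow_map; // the one texture shadow mapping requires
out vec4 frag_color;
void main()
{
    // 0.0 = in shadow, 1.0 = lit (hardware depth comparison)
    float lit = textureProj(shadow_map, shadow_coord);
    frag_color = vec4(v_color.rgb * lit, v_color.a);
}
```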

Thank you for your answer.
Of course, I know that I have to generate a texture in which the depth data will be kept.
When I wrote that I don’t use textures, I meant the objects that I display.
Unfortunately, I do not know where to get this data.
For example, to create the depth texture I use these shaders:

 in vec4 a_position;
 uniform mat4 u_lightMatrix;
 out vec4 position;
 void main () {
     position = u_lightMatrix * a_position; // I can calculate the position
 }

Unfortunately, I don’t know how to generate the color for the depth texture, so for now I have left it like this (just so it compiles):

 out float FragColor;
 void main () {
     FragColor = 1.0;
 }
I have no problem with displaying objects, I just don’t know how to generate the necessary color changes.

Should I do it in the VS or is it better in the FS?
I read that it is best to do all calculations in the Vertex Shader because it is faster.

Depth textures don’t have colour, only depth. When you’re generating the depth texture, the fragment shader can be empty, and the FBO doesn’t need to have any colour attachments.

In the pass where you render objects using the depth texture to generate shadows, testing the fragment’s depth against the value from the depth texture produces a value which you use to modulate the light’s intensity. Any colour information is a property of the surface being rendered and possibly of the light.
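For reference, a depth-only FBO of the kind described above is usually created along these lines (a desktop-GL sketch; the 1024×1024 size and variable names are arbitrary):

```c
GLuint depth_tex, shadow_fbo;

glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

glGenFramebuffers(1, &shadow_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, shadow_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depth_tex, 0);
glDrawBuffer(GL_NONE);  /* no color attachment, so draw to no color buffer */
glReadBuffer(GL_NONE);
```

With glDrawBuffer(GL_NONE), a fragment shader with no outputs is valid; drawing with such a shader while a framebuffer with active color buffers is bound is what produces the GL_INVALID_OPERATION error reported later in the thread.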

the fragment shader can be empty,

Well, not really. When I leave it empty, I get the message:

GL_INVALID_OPERATION: Active draw buffers with missing fragment shader output.

the value from the depth texture produces a value which you use to modulate the light’s intensity

Well, how to do it?
I tried like this:

outColor = v_color * texture2D(u_projectedTexture, position);

But I get an error about the shader’s output.
Generally I don’t understand how textures work in shaders, and so I don’t use them, but a shadow would come in handy.

You missed the part about:

the FBO doesn’t need to have any colour attachments

Though that probably should read “the FBO needs to not have any colour attachments”.

You implement shadow mapping. This is the core, fundamental algorithm of shadow mapping.

You really should fix that. A texture is just a lookup table, a big 1/2/3-dimensional array. You index that table (in one, two, or three dimensions) to pick a value out of the table. There’s more to it than that, but that’s the idea in a nutshell.
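As a GLSL sketch of that lookup-table idea (names illustrative):

```glsl
#version 130
uniform sampler2D table; // the "big 2-dimensional array"
in vec2 coord;           // a 2-D index, with each component in [0,1]
out vec4 value;
void main()
{
    value = texture(table, coord); // pick a value out of the table
}
```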

Realistically, you need to get a book on (modern) OpenGL and read it.

Trying to use random code found on the internet without any real understanding of what that code is doing isn’t going to work. Far more so with OpenGL than with programming in general.


I have the book “OpenGL SuperBible, Fifth Edition”.
I learned a lot from it. Unfortunately, shaders have always been confusing to me, and I don’t really understand them even after reading the book. I just need to write more code. Eventually there will be a breakthrough and I will understand what it’s all about.

Are you familiar with coordinate geometry (“vector math”)? Because shaders are the one area where you need to understand this. Historically, a 3D rendering API largely eliminated the need to understand most of this. But shaders require you to write code for things which would previously have been done for you.

Yes, I understand vector geometry. The main problem is converting between data types and knowing where to get some of the data from. For example, to read the depth texture I am using the “texture(texture, coord)” function, which requires texture coordinates. Where do I get these coordinates? I pass the vertex coordinates on to the FS, but I don’t know if this is good reasoning.
As for converting between data types, for example I don’t understand why in the VS

float light = dot(normal, light_direction);
v_color.rgb *= light;

it compiles,

and in the FS

float shadowLight = (inRange && projectedDepth <= currentDepth) ? 0.0 : 1.0;
outColor = v_color.rgb * shadowLight;

it does not compile.

For shadow mapping, texture coordinates are derived from world coordinates by treating a light source as a camera.

To generate the shadow map, you render the world with the light as the camera, so you get a rendering of the world (in the depth buffer) from the light’s viewpoint. When you then render the world for display, you transform the 3D object-space coordinates according to both the “real” camera and the light camera. Those transformed by the real camera go into gl_Position while those from the light camera are passed in user-defined variables. In the fragment shader, the light-space X/Y coordinates are used to read a depth value from the texture which is compared against the light-space Z coordinate. If the two are equal, then the primitive currently being rendered is the same one which generated the values in the depth texture (i.e. it’s the closest primitive, in light space). Otherwise, the value from the depth texture will be closer and the fragment being rendered is in shadow.

The calculation being performed in the VS ignores the shadow map. It’s just calculating the surface colour based upon the light direction and the surface normal; nothing else in the scene affects it.

For shadow mapping, texture coordinates are derived from world coordinates by treating a light source as a camera

Ok. So I have to add the light’s camera position to the scene-rendering program, and then read the coordinates?
I vaguely understand it.
The only thing that gets passed to the main scene’s shader program is the texture.
Could you give some pseudo-code that illustrates this?

Shadow rendering, vertex shader:

#version 130
uniform mat4 light_matrix;
in vec4 a_position;
void main()
{
    gl_Position = light_matrix * a_position;
}

Shadow rendering, fragment shader (empty):

#version 130
void main()
{
}

Main vertex shader:

#version 130

in vec4 a_position;
in vec3 a_normal;

out vec3 position;
out vec3 normal;
out vec4 texcoord;

uniform mat4 light_matrix;
uniform mat4 modelview_matrix;
uniform mat4 projection_matrix;

void main()
{
    vec4 pos = modelview_matrix * a_position;
    position = pos.xyz;
    gl_Position = projection_matrix * pos;
    normal = (modelview_matrix * vec4(a_normal, 0.0)).xyz;
    texcoord = light_matrix * a_position;
    // transform from [-1,1] (NDC) to [0,1] (texture coordinates)
    texcoord *= vec4(0.5, 0.5, 0.499, 1.0);
    texcoord += texcoord.w * vec4(0.5, 0.5, 0.5, 0.0);
}

Main fragment shader:

#version 130

in vec3 position;
in vec3 normal;
in vec4 texcoord;

uniform sampler2DShadow shadow_map;

void main()
{
    vec4 ambient = /* omitted for brevity */;
    vec4 diffuse = /* omitted for brevity */;
    vec4 specular = /* omitted for brevity */;

    float k = textureProj(shadow_map, texcoord);

    gl_FragColor = ambient + k*(diffuse + specular);
}

light_matrix is the combined model-view-projection matrix for the light. The depth texture has been set up as a shadow map, i.e.:
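The depth-texture state that makes `sampler2DShadow` and `textureProj` perform the comparison is typically along these lines (a sketch, not necessarily the poster’s exact settings):

```c
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
```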


It’s not bad, there is progress.
It generates a texture for me and even applies it to my model. It looks like a shadow.
Unfortunately, the shadow comes out smaller. I had to use a scaling matrix, and then it started to resemble what it should, but it is shifted a lot. It couldn’t be too easy.
Is this normal?
The good news is that I’ve finally started to understand how it works.


Does light_matrix have the same value for both passes?

Is the viewport set correctly for both passes? For the above, the first pass needs to have the viewport set to match the dimensions of the shadow map (you need to set this explicitly; viewports are context state, not part of the framebuffer state). It shouldn’t be larger; and if it’s smaller (i.e. you want to use only a portion of the texture for whatever reason) the section following

// transform from [-1,1] (NDC) to [0,1] (texture coordinates)

needs to be adjusted to map [−1,1]² to whatever portion is used rather than to [0,1]².

The NDC-texcoord mapping could be folded into light_matrix, but only for the second pass. I put it in the shader so that both passes use the same value for light_matrix.
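In code, the viewport handling for the two passes might look like this (a sketch; `shadow_fbo`, `SHADOW_SIZE`, `win_w` and `win_h` are assumed names):

```c
/* Pass 1: depth from the light's viewpoint */
glBindFramebuffer(GL_FRAMEBUFFER, shadow_fbo);
glViewport(0, 0, SHADOW_SIZE, SHADOW_SIZE); /* must match the shadow map */
/* ... draw the scene with the shadow shaders ... */

/* Pass 2: the visible rendering */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, win_w, win_h);             /* must match the window */
/* ... draw the scene with the main shaders ... */
```

The viewport is not saved and restored by the FBO binding, which is why it has to be set explicitly in each pass.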

Does light_matrix have the same value for both passes?


needs to be adjusted to map [−1,1]² to whatever portion is used rather than to [0,1]².

Easier said than done.
I will read about NDC-texcoord mapping.
I noticed something interesting. The shadow map is skewed at 45 degrees when using light position (50, 50, 100); however, when I give it light position (50, 0, 100), the texture/shadow is projected straight onto the objects (only shifted strongly).


I have located the texture shift.
I had set:
GL.viewport(0, 0, width, height); // width and height are the size of the window
And it should be:
GL.viewport(0, 0, 1024, 1024); // 1024 is the size of the texture
Now the position of the texture is no longer shifted.
Unfortunately, it is rotated 90 degrees to the left. I have run out of ideas on how to fix this.
In addition, the orthographic view behaves like a perspective view:
the farther the camera pulls away, the smaller the shadow texture becomes.

I have located the error. I read that matrix multiplication is not commutative. I looked at the code and saw that I had multiplied the position by the view matrix, when it should have been the other way around.
I would like to thank everyone who helped me.
I have learned a lot thanks to you. Nobody has ever helped me like you on this forum.
Again thank you very much.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.