I would like to set up shadow mapping with OpenGL. I have read several source codes that implement this technique, but when they render the scene from the light's point of view, they draw the polygons as if they were drawing them to the framebuffer…
But the only thing we need when rendering the scene from the light's point of view is the Z-buffer, so what I would like to know is:
When I render the scene from the light's point of view, can I stop the rendering at the Z-buffer?
I think I have understood how to read the Z-buffer data with OpenGL functions.
But I do shadow mapping using OpenGL shaders; I think it is easier than with OpenGL extensions…
So, this is what I want to do:
Render the scene from the light's point of view and save the Z-buffer in a texture
(here there is nothing in particular to do in the vertex and fragment shaders)
Render the scene from the camera's point of view.
This is where the shaders intervene.
In the vertex shader, I think there is nothing in particular to do… but in the fragment shader, where I set up per-pixel lighting, I think I have to check, for each fragment, whether it is in front of or behind the corresponding depth stored previously in the Z-buffer (as seen from the light's point of view), in order to know whether it is a lit or a shadowed fragment.
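For the first pass, a rough sketch of saving the Z-buffer into a texture could look like this (names such as depthTex, SHADOW_SIZE, and drawScene are assumptions, and depthTex is assumed to be a texture created with a GL_DEPTH_COMPONENT internal format):

```c
/* Pass 1 sketch: render from the light's point of view, depth only,
   then copy the Z-buffer into a depth texture. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* we only need depth */
/* ... load the light's projection and modelview matrices ... */
drawScene();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* grab the depth buffer into the bound depth texture */
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, SHADOW_SIZE, SHADOW_SIZE);
```

Disabling the color writes with glColorMask answers the "can I stop at the Z-buffer" question: the polygons are still rasterized, but only depth is written.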
=> Here are some questions I ask myself:
gl_Vertex is a GLSL built-in variable that holds the current vertex coordinates. If I save this variable into a 'varying' variable, will that variable be interpolated in the fragment shader? That would be very useful if I want to transform the interpolated fragment coordinates with the light's modelview and projection matrices!
Moreover, I don't know how to read the saved Z-buffer data (the texture) in OpenGL shaders, but I think it is the same thing as reading a color from a texture with shaders…
I hope this explanation is clearly understandable! Thank you for helping me!
Everything you put into varying variables in the vertex shader will be interpolated in the fragment shader; that's what the varying keyword is for.
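As a minimal sketch (the varying name is arbitrary):

```glsl
// vertex shader
varying vec4 fragmentPos;   // object-space position, one value per vertex

void main()
{
    fragmentPos = gl_Vertex;      // saved into the varying
    gl_Position = ftransform();
}

// fragment shader
varying vec4 fragmentPos;   // arrives here interpolated across the triangle

void main()
{
    gl_FragColor = vec4(fragmentPos.xyz, 1.0);  // just to visualize it
}
```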
I don't know how to read the saved Z-buffer data (the texture) in OpenGL shaders
Use this uniform:
uniform sampler2DShadow shadowTexture;
and this sampling operation:
color = shadow2D(shadowTexture, texCoord);
and don't forget to enable depth comparison for your depth texture; it won't work otherwise:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
Thank you for your reply.
But I don’t understand why you do that:
color = shadow2D(shadowTexture, texCoord);
I don't want to create a shadow texture; I want to compute per-pixel shadows. So, for each fragment, what I want to do in pseudocode is:
(in the fragment shader)
varying vec4 fragmentPos; // interpolated fragment coordinates
/* Here I compute the fragment coordinates in light coordinates;
   lightModelviewMatrix is the modelview matrix from the light's point of view */
fragmentPos = perspectiveMatrix * lightModelviewMatrix * fragmentPos;
/* I still don't know the syntax for reading the Z-buffer, so I invent it ^^ */
So, is this correct? I hope that fragmentPos, after this transformation, is correct for finding the corresponding fragment in the Z-buffer…
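A sketch of how this could look with the built-in shadow samplers (the uniform name lightMatrix is an assumption; it would be uploaded by the application as bias * lightProjection * lightModelview, assuming the object's model transform is identity, otherwise it must be included too):

```glsl
// vertex shader
uniform mat4 lightMatrix;     // bias * lightProjection * lightModelview (uploaded by the app)
varying vec4 shadowCoord;

void main()
{
    shadowCoord = lightMatrix * gl_Vertex;  // vertex position in the depth map's texture space
    gl_Position = ftransform();
}

// fragment shader
uniform sampler2DShadow shadowTexture;
varying vec4 shadowCoord;

void main()
{
    // shadow2DProj divides by shadowCoord.w, then compares the resulting depth
    // against the stored depth map value: 1.0 if lit, 0.0 if shadowed
    float lit = shadow2DProj(shadowTexture, shadowCoord).r;
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```

Note that the transformation can be done in the vertex shader and interpolated, rather than recomputed per fragment.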
But I have seen that gl_Position is calculated with:
gl_Position = ftransform();
But it is also equivalent to:
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
Now, what I want to do, in order to use the modelview matrix from the light's point of view and the projection matrix, is to pass these to the fragment shader through uniform mat4 variables in the rendering loop…
But there is also the gl_ModelViewProjectionMatrix mat4 variable that GLSL offers in the vertex shader. So can I also retrieve it in my OpenGL program, outside the shaders, instead of saving the projection and modelview matrices (from the light's point of view) separately and multiplying them in the fragment shader?
shadow2D and shadow2DProj do the job for you! Have a look at their definitions.
Use ftransform or the explicit matrix multiplication in your vertex shader. But I must admit I didn't really understand what you said in your last paragraph.
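If the question is how to get the light's combined matrix outside the shaders: there is no direct query for gl_ModelViewProjectionMatrix, but a common approach (a sketch; the uniform names and the program variable are assumptions) is to read back the projection and modelview matrices while the light's transforms are loaded on the matrix stacks, and upload them as uniforms:

```c
/* While the light's projection/modelview are set on the matrix stacks: */
GLfloat lightProj[16], lightMV[16];
glGetFloatv(GL_PROJECTION_MATRIX, lightProj);
glGetFloatv(GL_MODELVIEW_MATRIX, lightMV);

/* Upload to the shader; the shader can multiply them,
   or you can multiply them once on the CPU instead. */
GLint locP = glGetUniformLocation(program, "lightProjection");
GLint locM = glGetUniformLocation(program, "lightModelview");
glUniformMatrix4fv(locP, 1, GL_FALSE, lightProj);
glUniformMatrix4fv(locM, 1, GL_FALSE, lightMV);
```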
Thank you for this reply, and excuse me for the late answer!
Jide> Yes, thank you, I am going to look at these functions' specifications. GLSL is very practical! I would not have thought that shadow mapping was already integrated into the language!
I am also doing the same job these days.
I think you had better understand the concept of "projective texture mapping" first. This concept is very useful, because when you render each fragment from the eye's view, you have to project the fragment's world coordinates into the depth map's texture space.
Maybe you can find the answer in the book 'The Cg Tutorial'.