When rendering to an FBO?

I’m using OpenGL 3.2 (core context) with GLSL 1.50
NVIDIA GeForce 9500 GT

I finally got my cubemap shadowing working with a point light. The weirdest thing about it is that it turned out to be a problem with the depth test: when I used an R32F-format texture attached to the FBO and rendered to each of the faces, it was as if depth testing didn’t matter at all; only the order in which I rendered the objects did.

So if I had

renderCube() //this cube is smaller

renderCube() //this cube is taller

only the taller one could cast a shadow onto the smaller one, but the smaller one couldn’t cast one back (this is with the light placed at about the height of the smaller cube, so it should be able to cast a shadow onto the taller one).

Vice versa if I switched the order.

The way I fixed the problem was by making my cubemap use a depth-component format instead (which I know is possible since, I think, the NVIDIA 9000 series and up). Not only that, in the shader I had to write into gl_FragDepth to actually get depth values.
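Roughly what I ended up creating for the cubemap (a sketch; the handle name and the 512x512 size are just placeholders):

GLuint depthCubeTex;
glGenTextures(1, &depthCubeTex);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeTex);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
for (int face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT24,
                 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);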

Shouldn’t the FBO, even if I only use a color attachment, still come up with depth as long as the depth test is enabled?

lol, where do you think the FBO will get its depth from, then?
It can’t use your main framebuffer’s depth; just live with that.
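If you want depth testing while rendering into the FBO, give it its own depth, e.g. a renderbuffer (a minimal sketch; the fbo handle and the 512x512 size are placeholders and must match your cube face size):

GLuint depthRb;
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);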

Next, can you explain the need to write gl_FragDepth?

I ended up using GL_DEPTH_ATTACHMENT instead of GL_COLOR_ATTACHMENT0.

Since I was doing point-light shadowing with a cubemap and the range of values needed to be [0,1], I apparently needed to write some kind of normalized distance value, which I found as:

dot(lightVec, lightVec) * (1.0 / lightRadius^2)

This actually worked. Also, I think that if I don’t write to it there will be no depth value at all.

The thing is, I kept seeing many tutorials and source code using a texture with an R32F format and was going in that direction, so I’m more curious why I couldn’t get it working with that format instead of using DEPTH_COMPONENT24.

I mean, was I supposed to attach an R32F texture to the framebuffer’s GL_DEPTH_ATTACHMENT?

Have you done any shadow mapping before?
Depth values are always between [0,1] at the final stage.

There is no need to compute the depth per fragment: just set up a proper projection matrix.
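For a point light rendered into a cube map that just means a 90-degree, 1:1-aspect perspective projection per face; something like this (a sketch, with zNear/zFar chosen to bracket the light’s range):

// column-major, gluPerspective-style matrix with fovy = 90 degrees, aspect = 1
void cubeFaceProjection(float zNear, float zFar, float m[16])
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = 1.0f;                                   // cot(45 deg) / aspect
    m[5]  = 1.0f;                                   // cot(45 deg)
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = -1.0f;
    m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}

After perspective division and the default depth range, the hardware depth already lands in [0,1].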

In tutorials they use a color texture for one simple reason: rendering to a cube-map depth texture wasn’t possible until, say, a year or two ago.

I have done shadow mapping successfully before, so I do understand the idea of it.

Yeah, I knew the values should be between [0,1]. When I wrote the depth in the fragment shader, my output for the color was:

Done in the vertex shader:

lightVec = lightPosition - inPosition; // inPosition was already multiplied by worldMatrix
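Roughly, the whole vertex shader looks like this (simplified; viewProjMatrix and the other uniform names here are placeholders, not my exact code):

#version 150

uniform mat4 worldMatrix;     // object -> world
uniform mat4 viewProjMatrix;  // world -> clip space of the current cube face
uniform vec3 lightPosition;

in  vec3 inPosition;
out vec3 lightVec;

void main()
{
    vec4 worldPos = worldMatrix * vec4(inPosition, 1.0);
    lightVec    = lightPosition - worldPos.xyz;
    gl_Position = viewProjMatrix * worldPos;
}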

/////////////////////
Fragment Shader

out vec4 Color0;

in vec3 lightVec;
float lightRadius = 100.0;

void main()
{
    float depth = dot(lightVec, lightVec) * (1.0 / (lightRadius * lightRadius));

    Color0 = vec4(depth);
}

so I can’t see the values being out of the [0,1] range as long as the object is within that radius.

Here’s an image from when I tried attempting it with the color attachment:


(The image is bigger than what is displayed here on the forum)

In the red area you can see it trying to cast a shadow onto the taller cube, but it just can’t.

For some reason that had to do with the order in which I was rendering the objects when writing to the FBO.


fbo->Use();

for (int i = 0; i < 6; i++)
{
    fbo->CubeMap(i);

    glClear(color and depth);

    render small object
    render tall object

    // etc. only the 2 cube objects are affected; everything else seems fine
}

The tall object can cast a shadow onto the small object, but the small object won’t cast one onto the tall object.

Vice versa if I switch the lines of code that render the objects.

So you do understand the idea, that’s good.

Next, why are you now posting the incorrect way of doing shadow mapping (color without a depth buffer)? Isn’t it clear to you (after my first answer) that a color buffer is not required (though you can still use it for compatibility reasons)? Or have I not understood your problem correctly?

Next, tell us why you want to write the depth manually.

Yes, it is for compatibility reasons that I was trying to get it to work like this.

This has to do with point-light cubemap shadows, nothing else. A lot of those tutorials would attach a color texture to the FBO’s color attachment, not a texture to the depth attachment. They would use an RGBA texture, manually write the squared distance from the light position to the object position, and store it in the texture’s red channel. I never turned off the depth test, since I would of course need an object that’s closer to the camera view to hide an object behind it. It just seemed like that occlusion was not happening, and that’s what I found weird.

I write to gl_FragDepth because I need to know the squared distance from the light’s position to the objects so I can do the comparison in the actual rendering pass. In my light shader I can take the squared distance from the light to the object and compare it to what’s stored in the texture:

distsqrd > depth ? 0.0 : 1.0
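In the lighting pass that comparison looks roughly like this (a sketch; the sampler and uniform names are placeholders, and lightRadius must match the baking shader):

uniform samplerCube shadowCube;  // stores the normalized squared distance
uniform float lightRadius;

float shadowFactor(vec3 lightVec)  // lightVec = lightPosition - worldPosition
{
    float stored   = texture(shadowCube, -lightVec).r;
    float distSqrd = dot(lightVec, lightVec) / (lightRadius * lightRadius);
    return (distSqrd > stored + 0.001) ? 0.0 : 1.0;  // small bias against acne
}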

DmitryM, I really appreciate you putting up with this so far it must be kind of annoying.

You don’t seem to be getting what I’m saying.

Old method:
    for each side of the cube:
        setup the modelview & projection of the virtual spot light
        color0 = cube_color[side]
        depth = common_depth
        activate FBO
        clear color & depth
        render scene, write depth into the color

New method:
    depth = cube_map_depth
    color = none
    activate FBO
    clear depth
    render scene:
        vertex shader: model space -> light space coords
        geometry shader: for each side:
            set projection
            select layer
            emit triangle
        fragment shader: EMPTY

Now, I hope, the idea is clearer.
Conclusion: no need to modify gl_FragDepth in the case of a cube depth texture, and no need for color in this case.
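The geometry shader for the new method can be as small as this (a sketch; faceViewProj and worldPos are placeholder names, and the cube depth texture is attached as a layered attachment with glFramebufferTexture):

#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;

uniform mat4 faceViewProj[6];  // one view-projection matrix per cube face

in vec3 worldPos[];            // world-space position from the vertex shader

void main()
{
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face;       // route the triangle to this cube-map layer
        for (int v = 0; v < 3; ++v)
        {
            gl_Position = faceViewProj[face] * vec4(worldPos[v], 1.0);
            EmitVertex();
        }
        EndPrimitive();
    }
}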

Oh thanks!

I wrote to the depth because I was not using the geometry shader.

Well, there is a middle way:
select the cube side on the GL side
render into its 2D depth face

But it doesn’t require you to write gl_FragDepth either.
In fact, you should avoid writing it whenever possible. In all my shadow mapping I write it only for ESM, and I try not to use it at all.
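On the GL side the middle way looks roughly like this (a sketch; depthCubeTex and the per-face matrix setup are placeholders):

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDrawBuffer(GL_NONE);  // no color attachment at all
for (int face = 0; face < 6; ++face)
{
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, depthCubeTex, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    // set the view/projection for this face, then render the scene;
    // the fragment shader stays empty, no gl_FragDepth writes
}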

Sorry, I wasn’t clear. I figured that the depth test would just work, but you had mentioned that there’s no depth to work with if one is not attached to the FBO. That was my problem with the old method: I assumed it would be able to do the depth test without any extra work. I found that I should have attached a renderbuffer to GL_DEPTH_ATTACHMENT so that depth testing would work.

Thank you very much!!! Even more so for explaining how to do this with the geometry shader. I can’t believe I missed that. I really did assume that the depth test would work automatically on an FBO.

Ok, I still haven’t found out why you need to write to gl_FragDepth, but I’m glad you got it to work.

It would be nice to see your progress in screenshots or something.
Good luck :)

I’ll try to explain the gl_FragDepth thing…

I was having problems with the depth testing. I figured that if I attached to the depth attachment instead I wouldn’t have that problem, but if I did that I would still need the squared distance from the light to the object to be written as my depth. I can’t write to a color output since I didn’t attach one, so I had to write directly into the depth buffer using gl_FragDepth instead. If I don’t, it will not work.

If you think about it, it eliminates the need for a color attachment and a color texture. I would also only have to clear the depth buffer instead of both the color and depth buffers.

I did it as a means to get it to work; it’s not that I planned on doing it that way from the start. This is the old method, just without a color texture.

So you did it because your sampling code expected squared distance?
But what stops you from modifying the sampling together with the shadow baking shader?
If you don’t write to gl_FragDepth, the depth values written are the ones from your gl_Position.z after perspective division and clamping, which is exactly what we need when doing shadow mapping.
What is important is that the Z test occurs before fragment shader execution, because the Z is already known (interpolated). This, of course, makes your SM baking faster.

So you did it because your sampling code expected squared distance?

Yes. I also found that if I use gl_FragDepth, the z-test is performed after the fragment shader, using my custom depth equation to do the comparison instead:

gl_FragDepth = dot(lightVec, lightVec) * (1.0 / (lightRadius * lightRadius));

It kept what mattered in the range [0,1].

But what stops you from modifying the sampling together with the shadow baking shader?

I had forgotten to add a renderbuffer to be used for depth, so that OpenGL could perform the z-test while I wrote my values to the R channel.

What is important is that the Z test occurs before fragment shader execution, because the Z is already known (interpolated). This, of course, makes your SM baking faster.

You’re right, and I tried to do it that way, but in the beginning I had a problem with the z-test because I hadn’t added a renderbuffer for the depth buffer when using the FBO. Because of that, OpenGL wasn’t able to perform a z-test at all.

Since I figured that out, I went back to using the R32F-format texture, and now it works.
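For completeness, the working setup is roughly this (a sketch; the size and handle names are placeholders, and the depth renderbuffer from earlier stays attached to GL_DEPTH_ATTACHMENT so the z-test works):

// R32F cube map holding the normalized squared distance
GLuint colorCubeTex;
glGenTextures(1, &colorCubeTex);
glBindTexture(GL_TEXTURE_CUBE_MAP, colorCubeTex);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
for (int face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_R32F,
                 512, 512, 0, GL_RED, GL_FLOAT, NULL);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
for (int face = 0; face < 6; ++face)
{
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, colorCubeTex, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // set this face's matrices and render the shadow casters
}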