# Depth buffer plane clipping test

Hey dudes, I was wondering if there is some kind of depth test or similar that only writes to the depth buffer if the line between the camera and the geometry intersects a certain plane, and whether you could have it write only the depth value from the point where the line intersects the plane to the vertex.
What I want to do is a depth buffer test against a plane that represents water, and only get the depth value from the water surface to the observed vertex. I want to use this data for Beer's law, but I just realized that all this time I have been taking the depth from the point of view to the vertex, instead of the depth from the surface to the vertex.

Can’t think of any way to make myself more clear without pictures, sorry =/

I have been thinking of taking the normal depth buffer, calculating the distance from the eye to the surface along the view vector, and then reducing the depth buffer's value by that amount, but I want to know if there is some cheaper way of doing what I am asking for?

> I want to use this data for Beer's law, but I just realized that all this time I have been taking the depth from the point of view to the vertex, instead of the depth from the surface to the vertex.

It would be much easier to simply generate the water-to-point distance in the fragment shader as needed with ray-intersection tests (or whatever works to compute the value). Depth is always measured linearly parallel to the plane of projection, not radially. So you can’t use the computed depth component to get a radial distance from the eye.
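That ray-intersection approach boils down to a ray–plane intersection against the water surface. Here's a minimal sketch of the math in Python, assuming a horizontal water plane; all the names (`eye`, `frag`, `water_y`) are illustrative, and the same arithmetic ports straight into a GLSL fragment shader using the interpolated world-space position:

```python
# Sketch: distance travelled underwater along the eye->fragment ray,
# assuming a horizontal water plane at height water_y.

def underwater_distance(eye, frag, water_y):
    """Distance from where the eye->frag ray crosses the water plane
    down to the fragment itself (0 if the fragment is above water)."""
    ex, ey, ez = eye
    fx, fy, fz = frag
    if fy >= water_y:            # fragment is not submerged
        return 0.0
    dx, dy, dz = fx - ex, fy - ey, fz - ez
    length = (dx * dx + dy * dy + dz * dz) ** 0.5
    if ey <= water_y:            # eye itself is underwater: whole ray counts
        return length
    # Parametric t in [0,1] where the ray crosses y == water_y:
    t = (water_y - ey) / dy
    # Remaining part of the ray, from the crossing point to the fragment:
    return (1.0 - t) * length
```

Note this gives a true radial distance through the water, which is exactly what Beer's law wants, rather than the window-space depth value.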

Alright, sounds good (stupid tutorial at Bonzai Software for not even being aware of how wrong it is to just take the depth value from eye to ground surface, but that's what you get for following such an old tut, I guess). Any idea how I would go about doing that? Would I have to pass the vertex data regarding the ground surface to the water shader?

Is Beer's law something overly hardcore? I have been snooping around for OpenGL Beer's law tutorials, and all of them are for raytracers… =/
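For what it's worth, Beer's law (Beer–Lambert) itself is not hardcore at all; it's just exponential attenuation with distance, and nothing about it is raytracer-specific. A minimal sketch, where the absorption coefficient `absorption` is a made-up tuning parameter:

```python
import math

def beer_attenuation(distance, absorption):
    """Beer-Lambert law: fraction of light surviving after travelling
    `distance` through a medium with absorption coefficient `absorption`."""
    return math.exp(-absorption * distance)

# Typical shader usage: the deeper the water, the less light survives,
# so blend more toward the water colour, e.g.
#   final = mix(water_colour, refracted_colour, beer_attenuation(d, k))
```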

Maybe I could do some kind of one-time-only depth calculation for water-to-submerged-surface and save it to a texture? Then I would obviously only get a mock-up, but I would still know how deep the water is at any point in the fragment shader, just not how much water the light travels through from the point to the eye.
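If the water surface is flat, that one-time bake is trivially simple: per texel, the water depth is just the water height minus the terrain height, clamped at zero. A sketch, assuming you have the terrain as a heightmap grid (`heightmap` and `water_y` are illustrative names):

```python
# Sketch: bake per-texel vertical water depth from a terrain heightmap,
# assuming a flat water surface at height water_y.

def bake_water_depth(heightmap, water_y):
    """Return a same-sized grid where each texel holds how deep the water
    is over that point (0 where the terrain pokes above the surface)."""
    return [[max(0.0, water_y - h) for h in row] for row in heightmap]
```

You would upload the result as a texture and sample it in the fragment shader; as you say, it gives vertical depth only, not the eye-to-point path length through the water.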

Edit: After reading through the tutorial ( http://www.bonzaisoftware.com/water_tut.html ) again, I see that they state that it is only an approximation, but that it “works well”. I beg to differ on the “works well” part; my attempts at using the depth buffer to give shallow water more refraction/less water colour, and the other way around, have not quite yielded a good result.

Maybe I just suck =)

3d pic from the top:
http://www.thesis.strumpy.net/?p=32
Can anyone tell if that depth is from the actual water plane to the ground surface, or if it's just an ordinary depth buffer with plane clipping? (e.g. if the camera moved further away, the depth buffer would become whiter)

Hi, maybe this is what you are looking for.
There are some ideas about surface-based clipping with depth textures for volume rendering. For example, there are two depth textures: one holds the front depth of your bounding representation and one the back. In the shader you can compare the current depth value to check whether it is within or outside these two depth planes.
Maybe you can check out books.google.com and look for the book Real-Time Volume Graphics, page 391 at the bottom, where they explain clipping with depth layers.
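The per-fragment test described above is just an interval check on depth. A sketch of the idea, where `front_depth` and `back_depth` stand in for the two depth-texture fetches at the fragment's window coordinates:

```python
# Sketch of the two-depth-texture clipping test: a fragment is kept only
# if its depth lies between the front and back depths stored for its pixel.

def inside_clip_volume(frag_depth, front_depth, back_depth):
    """True if the fragment lies between the two depth planes,
    i.e. inside the clipping volume (discard it otherwise)."""
    return front_depth <= frag_depth <= back_depth
```

In a GLSL fragment shader this becomes two texture lookups and a comparison against the fragment's own depth, with `discard` when the test fails.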

regards,
lobbel