Interpolating depth within fragment program

Can somebody please give me some guidance on how to linearly interpolate depth within a fragment program? I'm retrieving the Z coordinate (in window coordinate space) from a depth texture and I want to linearly interpolate it. I believe I need to transform Z_win to clip coordinates, perform the linear interpolation, and then transform back from clip to window space, but I'm not having much success with the two transforms.

Thanks for any pointers!

By interpolate, do you mean bilinear filtering (sample 4 texels, etc.)?

Apologies, I should have been clearer. I've implemented a ray caster that renders isosurfaces from 3D textures. The fragment shader uses the current 3D texture coordinate as the starting point of the ray; the ray's stopping point comes from a texture lookup into the result of a previous rendering pass. The fragment shader samples the 3D texture along the ray from front to back until an isosurface is found, at which point a color fragment is generated. The detection and coloring of the isosurface work correctly. However, I also need to compute the depth coordinate at the point where the ray intersects the isosurface. This is (or should be) done by interpolating the Z value at the point of intersection between the starting and stopping points of the ray. Unfortunately, I believe this interpolation is non-linear in screen coordinates. The starting Z value is given by gl_FragCoord.z; the stopping Z value is given by a lookup in a depth texture from a previous rendering pass (the entry and exit points of the ray are the front- and back-facing faces, respectively, of a bounding box of the scene).
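For reference, the front-to-back march I described looks roughly like this (a Python sketch of the shader logic, not my actual GLSL; `sample_volume`, `iso_value` and `num_steps` are hypothetical names). It returns the ray parameter t of the first isosurface crossing, which is the value I then need to turn into a depth:

```python
def march_ray(sample_volume, start, stop, iso_value, num_steps=128):
    """Step from `start` to `stop` through the volume; return the ray
    parameter t in [0, 1] of the first isosurface crossing, or None if
    the ray exits the volume without crossing the isosurface."""
    prev_t, prev_v = 0.0, sample_volume(start)
    for i in range(1, num_steps + 1):
        t = i / num_steps
        # point on the ray at parameter t (linear in texture space)
        pos = tuple(s + t * (e - s) for s, e in zip(start, stop))
        v = sample_volume(pos)
        # sign change (or exact hit) between consecutive samples
        if (prev_v - iso_value) * (v - iso_value) <= 0.0 and v != prev_v:
            # refine the crossing by linear interpolation between samples
            frac = (iso_value - prev_v) / (v - prev_v)
            return prev_t + frac * (t - prev_t)
        prev_t, prev_v = t, v
    return None
```

Note that t is linear along the ray in texture/object space, which is exactly why plugging it straight into the two window-space Z values gives the wrong depth.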

I believe I need to transform the starting and stopping Z coordinates from screen space to clip space (or perhaps eye space) to correctly perform linear interpolation between them. So, my first question: is this the right way to go about computing the Z value at the ray intersection? If so, my second question: how do I transform the Z coordinate from screen space to clip space, and then back again?
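In case it helps to be concrete, here is the math I'm trying to get right, as a Python sketch (assuming a standard symmetric perspective projection, the default glDepthRange(0, 1), and hypothetical near/far plane distances n and f; the same expressions would run per fragment in GLSL):

```python
def win_to_eye(z_win, n, f):
    """Window-space depth in [0,1] -> eye-space Z (negative in front of
    the camera), inverting the standard perspective projection."""
    z_ndc = 2.0 * z_win - 1.0          # window depth -> NDC depth in [-1,1]
    return 2.0 * f * n / (z_ndc * (f - n) - (f + n))

def eye_to_win(z_eye, n, f):
    """Eye-space Z -> window-space depth in [0,1] (inverse of win_to_eye)."""
    z_ndc = (f + n) / (f - n) + 2.0 * f * n / ((f - n) * z_eye)
    return 0.5 * z_ndc + 0.5

def lerp_depth(z_win_a, z_win_b, t, n, f):
    """Interpolate depth linearly in eye space (where the ray parameter t
    is linear), then convert the result back to window-space depth."""
    z_eye = (1.0 - t) * win_to_eye(z_win_a, n, f) + t * win_to_eye(z_win_b, n, f)
    return eye_to_win(z_eye, n, f)
```

The idea is that the ray parameter is linear in eye space but hyperbolic in window-space depth, so the lerp has to happen between the two eye-space Z values.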

I'd try multiplying the 8 z samples by the inverse ModelView matrix (which becomes a series of 8 simple DOT4 operations instead of 8 mat4*vec4 multiplies).

There's also some better-precision math here on the forums; search for topics like "reconstructing depth". There, znear, zfar and the fov are used to reconstruct linear depth faster and more precisely.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.