Is there an effective way of storing distance data?

I’m trying to work out how to arrange a buffer for the distance values of an object’s vertices in the camera’s view, so that the distance buffer can be filled while the object itself is being rendered in the normal rendering pass.
The distance data is needed for rendering an atmosphere in a post-processing pass.
Is there an effective and smart way to get this done?
Or is there no way other than adding an extra rendering pass just to render the distance buffer?

What is a “distance buffer”?
If you need per-vertex distance from the camera, that is trivial to calculate in the vertex shader (in the first pass). If you need per-pixel distance, you can use depth buffer values. Try the values “as they are” first, but if you are not satisfied with the effect, you should adjust them radially (i.e. convert planar depth into radial distance from the camera).
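A minimal sketch of the per-vertex case, assuming a conventional vertex layout (the names `uModelView`, `uProjection`, and `aPosition` are placeholders, not anything from this thread): the view-space distance is just the length of the view-space position.

```glsl
#version 330 core

// Hypothetical uniform and attribute names; adapt to your own setup.
uniform mat4 uModelView;
uniform mat4 uProjection;

in vec3 aPosition;

// Per-vertex distance from the camera (view-space origin),
// interpolated across the triangle for the fragment shader.
out float vDistance;

void main()
{
    vec4 viewPos = uModelView * vec4(aPosition, 1.0);
    vDistance   = length(viewPos.xyz);  // radial distance, not just -viewPos.z
    gl_Position = uProjection * viewPos;
}
```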

I need distance data per pixel. The data is needed for rendering an atmosphere in a post-processing pass. The distance values are not in normalized device coordinates but in the camera’s view-space coordinates.
I’m looking for a good way of storing this data during the rendering pass that renders the object, without an additional rendering pass just for writing the distance data into a buffer.

The Z-buffer does not contain distances to the camera; it contains distances to the camera’s Z-plane. So it won’t help you.

The only solution is to compute the distance to the camera-space origin and write it to a 32-bit single-channel float buffer. You create a GL_R32F texture and attach it as a render target with an FBO. Then write the distance from your fragment shader.
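A sketch of the fragment-shader side of that approach, assuming the vertex shader passes the view-space position through a varying (the names `vViewPos` and `oDistance` are made up for illustration):

```glsl
#version 330 core

// View-space position, interpolated from the vertex shader (hypothetical name).
in vec3 vViewPos;

// Single-channel output, rendered into a GL_R32F texture
// attached as a color attachment of the FBO.
out float oDistance;

void main()
{
    // Euclidean distance to the camera-space origin.
    oDistance = length(vViewPos);
}
```

On the application side, the texture’s internal format would be GL_R32F and it would be attached with glFramebufferTexture2D before this pass runs.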

I have said that the depth buffer can give a rough approximation for free. If it doesn’t serve the purpose, then a fragment shader can be used.

You can exploit the Z-buffer, which holds the distance to the camera’s Z-plane, and add to it the distance from the camera’s center (the projection center, (0,0,0)) to the intersection of the view ray (from the center to your 3D point in the scene) with the Z-plane.

C_Cam: center of the camera (origin)
Z-Plane: the camera’s projection plane
P_World: 3D point in world space


The line (C_Cam, P_World) intersects the Z-Plane.
Call the intersection point I_ZP.

I_ZP = (x_win, y_win, z_min),
with z_min = distance(C_Cam, Z-Plane) = const

The Z-buffer stores the distance between I_ZP and P_World.

So the distance Distance_Final between C_Cam and P_World is
Distance_Final = Z-Buffer(P_World) + distance(C_Cam, I_ZP)
with distance(C_Cam, I_ZP) = sqrt(z_min² + x_win² + y_win²)

I find it a bit difficult to explain clearly, but it’s not very difficult :smiley:

No, that is far from the only solution. And yes, you can use the depth buffer value to get the eye-space Z value if you want. For more details on those and other techniques, see these posts by Matt Pettineo:

and this post by Emil Persson:

Just be conscious of how the method of storage you choose quantizes with distance.
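As a sketch of the depth-buffer route (my own illustration, not taken from the linked posts), a post-process shader can reconstruct the view-space position from the sampled depth and the inverse projection matrix, then take its length. All names here (`uDepthTex`, `uInvProjection`, `vTexCoord`) are assumptions:

```glsl
#version 330 core

uniform sampler2D uDepthTex;      // depth buffer from the main pass
uniform mat4      uInvProjection; // inverse of the projection matrix

in  vec2 vTexCoord;
out vec4 oColor;

void main()
{
    float depth = texture(uDepthTex, vTexCoord).r;

    // Back from window coordinates [0,1] to NDC [-1,1].
    vec4 ndc = vec4(vec3(vTexCoord, depth) * 2.0 - 1.0, 1.0);

    // Unproject and divide by w to recover the view-space position.
    vec4 viewPos = uInvProjection * ndc;
    viewPos /= viewPos.w;

    // Radial view-space distance, usable for atmosphere rendering.
    float dist = length(viewPos.xyz);

    oColor = vec4(vec3(dist), 1.0); // placeholder output
}
```

Note that this assumes the default [0,1] depth range; with a reversed-Z or glClipControl setup the NDC conversion changes accordingly.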