Distance to closest pixel from view position

Hello everyone,

this is my first post in this forum. I hope I can state my problem appropriately. Here it is…

In my project I want to implement a slider that lets the user look inside a mesh, i.e. the front face is cut away to a degree corresponding to the slider position.
To realize this, I first draw the back faces and then the front faces of the object. While drawing the front faces, the fragment shader checks how far the current pixel is from the camera.
Depending on the distance, I then decide whether the pixel shall be transparent or opaque. To make this algorithm work properly, though, I need to know the distance of the object's current closest pixel so I can define an appropriate ratio. Since I can rotate the object, this distance also has to be updated every frame.

So my question is: how can I efficiently calculate this closest pixel?
The only idea I have in mind is to store the depth buffer in a texture. But then I would still have to find the lowest depth value in the texture and convert it so that it relates to the distances calculated in the shader.

Is this algorithm generally a good idea, or is there a better approach for such a look-inside view?

Thanks in advance for any proposals!


You could perhaps compute a faster estimate using the vertex closest to the viewpoint?
(OK, this isn’t “mathematically beautiful”, but it is certainly much faster and easier to compute than the closest pixel.)
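A CPU-side sketch of that estimate (assuming the vertices have already been transformed into view space, where the camera sits at the origin; in practice you would first multiply each vertex by the model-view matrix, e.g. with glm):

```cpp
#include <cassert>
#include <cfloat>
#include <cmath>
#include <vector>

// Minimal, illustrative vertex type (not from the original project).
struct Vec3 { float x, y, z; };

// Find the distance from the eye to the closest vertex.
// Assumes the vertices are already in view (eye) space.
float closestVertexDistance(const std::vector<Vec3>& verticesViewSpace) {
    float best = FLT_MAX;
    for (const Vec3& v : verticesViewSpace) {
        float d = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (d < best) best = d;
    }
    return best;
}
```

This would be called once after each rotation, and the result uploaded as a uniform for the cut-off test.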

Or only compute the closest distance between the faces and the viewpoint?
Point-Plane Distance
(no, bad idea, because planes are infinite…)

Or use an in/out uniform value that stores the closest pixel in a first pass and use this uniform value in a second pass?
(ah… uniform/varying/attribute/const are read-only :frowning: )


thanks for your suggestions. Since I’m quite new to OpenGL, I’m not sure how to implement this. Geometrically it is clear to me.

Where can I do these calculations? As I mentioned, I can rotate the object, so the closest vertex/face changes every frame. With the shaders I can only do calculations per vertex, right?

My mesh is stored in a VBO. So I should write my own function that iterates over all vertices/faces and computes the new nearest distance, and this function must be called after each rotation. Is that right?

Would this actually be faster? It still seems like a lot of calculations to me…

Thanks again!


Why not let the slider affect the near plane cutoff in the view frustum?

Another way would be to calculate the distance from the eye to the fragment in the fragment shader, compare it to a uniform that sets a distance limit, and then discard the pixel conditionally. You need the vertex shader to transform the vertex into view coordinates and forward that data to the fragment shader.

And exactly there is my problem. The slider specifies, as a percentage, how far the mesh shall be opened. Currently I use a uniform with a fixed distance, calculated from the length of the vertex farthest from the origin. This defines the range (distRange) a vertex lies in. The ratio is then calculated by:
(viewpos - curFragDist)/distRange

If my object were spherical, or almost spherical, this would be fine, but it is not.
If the object is rotated so that only triangles near the origin are visible, the slider has to be at around 90% before the mesh ‘opens’.

So I want a uniform that is calculated flexibly, depending on the viewing position.
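A small numeric sketch of the dead-zone effect (all names and numbers here are illustrative, not from the project): a fragment is cut (“opened”) when its distance, normalized against a reference distance and range, falls below the slider value.

```cpp
#include <cassert>

// Illustrative cut test: fragDist is the fragment's distance to the eye,
// nearestDist and distRange define the normalization, slider is in 0..1.
bool isCut(float fragDist, float nearestDist, float distRange, float slider) {
    float normalized = (fragDist - nearestDist) / distRange;
    return normalized < slider; // below the threshold: fragment is removed
}
```

With a fixed, object-sized range (nearestDist = 0, distRange = 10), a visible fragment at distance 9.2 only opens once the slider passes 92%; with a view-dependent range (nearestDist = 9, distRange = 1), the same fragment already opens at 20%.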

I use the glm library for similar things, using glm::project() to calculate screen coordinates. It is not as quick as having the graphics card do the job, but you don’t have to do the computation every frame.

When you have the screen coordinate for every vertex (ignoring those not visible), you can use the ‘z’ component to query the depth. It goes from 0 to 1, or the vertex is outside of the frustum.

That way you get a depth value for every vertex, and you simply find the smallest and the biggest of them.
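The per-vertex scan could be sketched like this; the input values are assumed to be the window-space z depths returned by glm::project() for each vertex (the glm call itself is not shown):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Scan projected per-vertex depths (0..1 inside the frustum) and return
// the smallest and biggest visible depth. Values outside 0..1 are treated
// as outside the frustum and skipped.
std::pair<float, float> depthRange(const std::vector<float>& projectedZ) {
    float lo = 1.0f, hi = 0.0f;
    for (float z : projectedZ) {
        if (z < 0.0f || z > 1.0f) continue; // outside the frustum
        if (z < lo) lo = z;
        if (z > hi) hi = z;
    }
    return {lo, hi};
}
```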

There are two drawbacks with this method. One is that it only considers vertices, not pixels. But it may be a good enough approximation.

The other drawback is that you have to find out which objects are inside the frustum. An object may be inside even if all of its vertices are outside. The simple approximation here is to test whether all vertices are outside on the same side of one of the 6 planes of the frustum. If so, you know for sure it is outside, although this test may not eliminate every outside object.

Then I just realised another solution: draw all objects with a color representing the distance, and read out the color of all pixels. It should be easy to define a shader that maps a distance to a color.

Hi Kopelrativ,
thanks for your advice. I’m sorry, but I still didn’t quite get your ideas…

Currently I only have one object of interest. Do you mean that I should calculate the distance for each vertex of that object and encode it as a color? How would I do that? And even if I could: in my shader, the closest vertex must be known before each vertex is processed.

In your former post you were talking about the glm::project() method, which gives me the vertex z-values, hence the depth values. You said I would not need to do this calculation on every render pass…
That’s another point I didn’t get: on each rotation, another vertex becomes the closest to my view position.

Although I still don’t know exactly how to proceed, thanks for your suggestions again. :eek:

To use color coding of the distance, you have to modify the vertex shader and the fragment shader. The change should be conditional, not active all the time.

Compute the position in the vertex shader:

out vec4 position;
position = gl_Position; // forward the screen coordinate to the fragment shader

Define a uniform flag in the fragment shader and an input for the position (computed in the vertex shader). When the object is rotated and you need to find the new max and min distance, set this flag to “true”. Draw the object into the back buffer, but don’t display it to the user (with the funny colors). When doing this drawing, your complete object should be included (zoomed out). Change your current computation of “fragcolor” into something like the following. You need to scale the value down by “X” to get it into the range 0-1.

uniform bool computing = false; // Only enabled when distance computation is activated.
in vec4 position; // the screen coordinates computed by the vertex shader
if (computing) {
    // The z of the screen coordinates ranges from 0 to 1 depending on the depth
    fragcolor = vec4(position.z/X, 0, 0, 1); // Encode the distance to the viewer in the red channel
} else {
    fragcolor = // your current logic to create the color, maybe based on a texture
}

For each pixel of the screen, read out the red color using glReadPixels(). You then get a value from 0 to 255, representing the depth as scaled by X/256. For debugging purposes while developing this algorithm, set “computing” to true all the time, and visually inspect that the colors indeed reflect the viewing distance.

Using this method, you can find the smallest and biggest values of the red color, representing the smallest and biggest distances to the viewer. From this, and the current slider selection, you can either change the definition of the view frustum to include only pixels of the selected depth, or change the fragment shader to discard pixels outside of the desired depth (using another uniform variable).
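The read-back scan might look like this on the CPU side; the byte buffer is assumed to come from glReadPixels() with format GL_RED and type GL_UNSIGNED_BYTE, and the background is assumed to be cleared to pure white so it can be skipped:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Scan the read-back red channel for its smallest and largest values.
// A red byte r encodes depth roughly as z = (r / 255) * X, inverting the
// shader's position.z / X encoding.
std::pair<int, int> redRange(const std::vector<uint8_t>& redChannel) {
    int lo = 256, hi = -1;
    for (uint8_t r : redChannel) {
        if (r == 255) continue; // assumed background (cleared to white)
        if (r < lo) lo = r;
        if (r > hi) hi = r;
    }
    return {lo, hi};
}
```

Whether skipping the clear value is safe depends on your clear color; clearing to white is just one convenient convention for this pass.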

Thanks a lot. Now I understand your idea. I will try that asap.
You helped me a lot.

Thanks again.


Note that the min/max values can also be computed very quickly using a bounding box or bounding sphere.
=> What type of object are you using?
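For example, with a bounding sphere in view space (eye at the origin), the whole depth range comes from a single distance computation; the names here are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Illustrative result type for the nearest/farthest distances.
struct DepthRange { float nearest, farthest; };

// Distance range from the eye (at the view-space origin) to an object
// enclosed in a bounding sphere with center (cx, cy, cz) and radius r.
DepthRange sphereDepthRange(float cx, float cy, float cz, float r) {
    float d = std::sqrt(cx * cx + cy * cy + cz * cz);
    float nearest = d - r;
    if (nearest < 0.0f) nearest = 0.0f; // the eye is inside the sphere
    return {nearest, d + r};
}
```

This is conservative (the sphere is usually bigger than the mesh), but it is constant-time per frame instead of a scan over every vertex.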

And/or use a 1D texture to map the distance to the color you want (or discard it).
(After rereading this thread, I see Kopelrativ has already proposed this :slight_smile: )