I’d like to determine the visibility of each of about 1000 vertices
that are part of a mesh rendered in the scene. Performance
is not the highest concern, and I’d like the result to work
on both ATI/Nvidia if possible.
So far I’ve considered the following approaches:
1. Depth-buffer readback:
   - gluProject each vertex,
   - read its Z from the depth buffer at the projected window position,
   - use gluUnProject to get the world-space Z (is this right?),
   - compare to the original Z of that vertex.
I implemented this approach and found it does not work all that well:
the unprojected Z is surprisingly far from the original.
I guess that is due to the quantization down to a single sample
in the Z buffer, but basically I think it is not accurate enough
to judge occlusion, or else (more likely!) there is something
wrong with what I'm doing.
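One thing that might help with approach #1: instead of unprojecting the depth sample back to world space, project the vertex and compare depths directly in window space, where both numbers live in [0,1] and a single epsilon can absorb the Z-buffer quantization. A minimal sketch in plain C (no GL context; the project() helper mimics what gluProject does, and the matrix values, epsilon, and function names are my own):

```c
#include <assert.h>
#include <math.h>

/* Multiply a 4x4 column-major matrix (as OpenGL stores it) by a vec4. */
static void mat4_mul_vec4(const double m[16], const double v[4], double out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = m[r] * v[0] + m[r + 4] * v[1] + m[r + 8] * v[2] + m[r + 12] * v[3];
}

/* gluProject-style: object coords -> window coords, winz in [0,1]. */
static int project(const double mvp[16], const int viewport[4],
                   double ox, double oy, double oz,
                   double *winx, double *winy, double *winz) {
    double in[4] = { ox, oy, oz, 1.0 }, out[4];
    mat4_mul_vec4(mvp, in, out);
    if (out[3] == 0.0) return 0;  /* degenerate / at the eye plane */
    double nx = out[0] / out[3], ny = out[1] / out[3], nz = out[2] / out[3];
    *winx = viewport[0] + (nx * 0.5 + 0.5) * viewport[2];
    *winy = viewport[1] + (ny * 0.5 + 0.5) * viewport[3];
    *winz = nz * 0.5 + 0.5;
    return 1;
}

/* Visible if the vertex's window-space depth is within eps of (or in
 * front of) the depth-buffer sample at its pixel; eps absorbs the
 * quantization of the Z buffer. */
static int vertex_visible(double winz, double sampled_depth, double eps) {
    return winz <= sampled_depth + eps;
}
```

The same comparison would be done against the value glReadPixels returns for GL_DEPTH_COMPONENT at (winx, winy); picking eps is the tuning knob that the world-space round trip hides.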
2. Use selection:
   - for each point, do the selection thing: set a pick viewport around the
     point, render, and look at the Z values of the objects that show up.
I fear this will be slow, and I wonder if it will have the same issues as
approach #1, i.e., that the point will not necessarily be the closest hit,
and that the depths will be too inaccurate to make a simple judgement.
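For reference, the per-point selection pass I have in mind would look roughly like this (an outline, not compilable code; the pick-region size and the render helpers are placeholders). Note that the hit records report zmin/zmax scaled to the full 32-bit unsigned range, so the same precision questions apply here:

```
for each point p:
    glSelectBuffer(bufSize, selectBuf);
    glRenderMode(GL_SELECT);
    glInitNames();
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPickMatrix(p.winx, p.winy, 3, 3, viewport);  // tiny region around the point
    applyNormalProjection();                        // placeholder
    renderScene();
    hits = glRenderMode(GL_RENDER);
    // each hit record: { nameCount, zmin, zmax, names... };
    // zmin/zmax are depths scaled to 0..2^32-1
```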
3. Simple color approach:
   - render the mesh in, e.g., blue,
   - set the depth test to LEQUAL,
   - render the points in green (also project them and record their screen x, y),
   - read the image back,
   - look at the color at each point's screen position.
This does not work: even with LEQUAL, the points often come out occluded.
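A guess at why: when a point is re-rasterized on top of the mesh, its depth rarely matches the mesh fragment's depth bit-for-bit, so LEQUAL against the already-written, infinitesimally nearer value fails. A tiny software sketch of the failure mode and of a depth-bias fix (the function names and bias value are mine; in GL the equivalent would be glPolygonOffset on the mesh pass, or nudging the points toward the eye):

```c
#include <assert.h>

/* Software model of the depth test in approach #3: depth_in_buffer is
 * what the mesh pass wrote; point_depth is the point's own rasterized
 * depth, which typically differs in the last few bits. */
static int passes_lequal(float point_depth, float depth_in_buffer) {
    return point_depth <= depth_in_buffer;
}

/* Same test, with a small bias pulling the point toward the viewer,
 * as glPolygonOffset (or shifting the point along the view ray) would. */
static int passes_with_bias(float point_depth, float depth_in_buffer, float bias) {
    return point_depth - bias <= depth_in_buffer;
}
```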
4. Put a small bounding box around each point and use one of the
occlusion-query extensions to count the samples that pass.
Would this be expected to work? The drawback is that it would not be widely
supported yet, and I'd still need a fallback approach in case the
card does not support it.
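Assuming the last approach means hardware occlusion queries (e.g. ARB_occlusion_query), the usual call sequence looks roughly like this (an outline, not compilable code; the bounding-box helper is hypothetical):

```
glGenQueriesARB(numPoints, queryIds);
// render the mesh normally to fill the depth buffer first
glColorMask(0, 0, 0, 0);       // no color or depth writes
glDepthMask(GL_FALSE);         // while issuing the queries
for each point p:
    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, queryIds[p]);
    drawTinyBoxAround(p);      // hypothetical helper
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);
for each point p:
    glGetQueryObjectuivARB(queryIds[p], GL_QUERY_RESULT_ARB, &samples);
    visible[p] = (samples > 0);
```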
So, experts! What’s the best solution here? Is there any variation
of #3 that could work?