Currently, I’m solving the problem of drawing the non-occluded triangle edges of a mesh by drawing all the triangles with polygon offset enabled (both factor and units > 0) and then drawing all the triangles’ edges as lines with depth testing enabled.
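For concreteness, the state sequence I’m describing is roughly this (drawMesh() is a placeholder for my draw call; the factor/units values are illustrative, not the exact ones I use):

```c
/* Pass 1: fill the triangles, pushed slightly back in depth. */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
drawMesh();                         /* hypothetical helper */
glDisable(GL_POLYGON_OFFSET_FILL);

/* Pass 2: draw the same triangles as lines; the un-offset line
   depths now pass the depth test against the pushed-back fill. */
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
drawMesh();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
```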
This method conflicts with some post-processing that uses depth values as input, like silhouette-edges* detection.
*Silhouette-edge → an edge shared by a front-facing and a back-facing triangle (relative to the view direction).
When polygon offset is active, additional depth discontinuities introduced by the slope-scaled bias are detected as silhouettes.
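To spell out why the bias is slope-scaled: per the OpenGL specification, the depth offset applied to each polygon is

```latex
o = m \cdot \mathrm{factor} + r \cdot \mathrm{units},
\quad
m = \max\!\left( \left|\frac{\partial z}{\partial x}\right|,
                 \left|\frac{\partial z}{\partial y}\right| \right)
```

where r is the smallest resolvable depth difference. Since m varies per polygon, adjacent polygons with different slopes receive different offsets, which is where the extra discontinuities come from.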
Is there a way to draw non-occluded triangle edges without enabling polygon offset?
Yes. I’m drawing the wireframes with depth testing and writing enabled, then drawing transparent geometry with blending and depth testing (but no depth write); this way transparent polygons are not drawn on top of their opaque edges.
However, the problem is the polygon offset I use when drawing opaque triangles: I’d need a “clean” depth buffer for edge detection, and the slope-scaled depth offset introduces discontinuities that conflict with it. So I was wondering whether there was a way to draw wireframes without offsetting polygon depths at all.
You can simulate wireframe by drawing the triangles with a fragment shader that only draws the edges (setting either alpha or the coverage mask to zero for non-edge fragments). That has the advantage that the depth values will be identical, so with GL_LEQUAL as the depth function the edge fragments will pass without any depth fighting.
Given barycentric coordinates, you can use dFdx/dFdy to calculate the distance in pixels from the sample point to the nearest triangle edge, so the lines have a consistent thickness.
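A minimal sketch of such a fragment shader, assuming a per-vertex barycentric attribute vBary set to (1,0,0)/(0,1,0)/(0,0,1) in an earlier stage (names, uniforms, and the smoothstep falloff are all illustrative):

```glsl
#version 330 core
in vec3 vBary;                 // barycentric coords from VS/GS
out vec4 fragColor;
uniform vec4  edgeColor;
uniform float lineWidth;       // desired edge width in pixels

void main() {
    // fwidth(v) = |dFdx(v)| + |dFdy(v)|: barycentric change per pixel,
    // so each component of d is a distance to an edge in pixel units.
    vec3 d = vBary / fwidth(vBary);
    float dist = min(min(d.x, d.y), d.z);

    // Fade the edge out over half a pixel for cheap antialiasing.
    float alpha = 1.0 - smoothstep(lineWidth - 0.5, lineWidth + 0.5, dist);
    if (alpha <= 0.0)
        discard;               // interior fragments contribute nothing
    fragColor = vec4(edgeColor.rgb, edgeColor.a * alpha);
}
```

Because the triangles themselves are rasterized, the interpolated depths match the fill pass exactly, so no offset is needed.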
But first I’d try drawing the scene without the wireframe, calculating the silhouette from that, then adding the wireframe afterwards.
Would that be different from drawing the same triangles with GL_LINE rasterization instead of GL_FILL?
The problem is that with the method I’m using (polygons with depth offset), the depth buffer after the polygon pass but before the wireframe pass has additional discontinuities introduced by the slope-scaled depth offset, which show up as false positives in the edge detection filter.
The method you suggested should work for me since there would be no need to apply a polygon offset in the first place.
I’m using glPolygonOffset to offset polygon depths, not a different projection or depth range. This works perfectly for drawing edges on top of triangles, since the API scales the offset according to each triangle’s slope. Offsetting the wireframe lines instead wouldn’t account for the different slopes of the triangles that share an edge.
This method works well for drawing edges over triangles but introduces a lot of artifacts for the edge detection pass.
Over in the COLLADA forum I’m trying to create a similar wireframe. It appears that my wireframe has to be built separately from the surfaces. This OpenGL discussion is helpful — if anyone has any further ideas, please let me know.