Solid + Wireframe

I have a view where a 3D model is rendered in solid + wireframe mode. I achieve this by rendering in solid mode first, followed by a wireframe pass. The lines are not continuous and show flickering artifacts… This is due to Z-fighting, even with the depth compare set to less_or_equal. I tried polygon offset, but it did not work well since it has other side effects on the rendering.

The stencil approach does not work with concave/complex polyhedral geometry, as there will be front-facing polygons that still get culled.

Offsetting the vertices along their normals when rendering the wireframe is an option, but first I want to make sure there’s no shader-based technique that is simpler and more accurate.

Help appreciated.

What’s the problem with poly offset? It should work fine for you.

How do I calculate the best offset values?

I’d recommend “1,1” :slight_smile:

But does this apply to polygons or to rendered lines? I’m rendering the wireframe as a sequence of lines.

If it doesn’t apply to lines, you could always negatively offset the polygons.

AFAIK, setting glEnable(GL_POLYGON_OFFSET_LINE) with glPolygonOffset(1.0,1.0) will give you a good starting point.
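To see where “1,1” comes from: per the OpenGL spec, the applied depth offset is o = m * factor + r * units, where m is the polygon’s maximum depth slope and r is the implementation’s smallest resolvable depth difference. A minimal sketch of that formula (the function name, the slope approximation, and the 24-bit r value below are my own illustrative assumptions, not queried from any driver):

```cpp
#include <algorithm>
#include <cmath>

// Depth offset as defined by the spec: o = m * factor + r * units.
// m is approximated here from the polygon's depth slopes dz/dx and dz/dy;
// r depends on the depth buffer, e.g. 1.0 / (1 << 24) for a 24-bit buffer.
double polygonOffset(double dzdx, double dzdy,
                     double factor, double units, double r)
{
    double m = std::max(std::abs(dzdx), std::abs(dzdy)); // max depth slope
    return m * factor + r * units;
}
```

With factor = units = 1, screen-facing polygons (slope ≈ 0) get pushed back by one resolvable depth step, and steep polygons get a proportionally larger offset, which is why “1,1” is a sensible default.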

You might take a look at the SIGGRAPH “sketch” (short paper) from 2006 called “Single-pass Wireframe Rendering” [Google it!]. It appears to be the “perfect” way of doing wireframe over solid rendering. It uses both a vertex and a fragment shader. Optionally, a geometry shader can be used to speed things up and handle a few nasty exceptional cases. NVIDIA has a demo that demonstrates the geometry shader approach.

The idea is to compute each fragment’s distance from the nearest triangle edge. Distances are computed in viewport (screen) space at the vertices in the vertex shader and are then interpolated during rasterization for each fragment, i.e. most of the work is done in the vertex shader. In the simple implementation (without a geometry shader), all three triangle vertices must be passed along with each vertex. This is because the distances from the current vertex to the edges of the “current” triangle must be computed in viewport space, which involves projecting the triangle into viewport space so that its edges appear as lines. The triangle vertices can be passed as (multi-)texture coordinates of each vertex.
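The core computation above — distance from a point to the nearest triangle edge, in viewport coordinates — can be sketched in plain C++ (the struct and function names are my own; in the actual technique this runs in the vertex/fragment shaders):

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { double x, y; };

// Perpendicular distance from point p to the infinite line through a and b:
// |cross(b - a, p - a)| / |b - a|.
static double distToLine(Vec2 p, Vec2 a, Vec2 b)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    return std::abs(dx * (p.y - a.y) - dy * (p.x - a.x)) / len;
}

// Distance from a fragment's window position p to the nearest edge of the
// triangle (v0, v1, v2), all in viewport (pixel) coordinates.
double distToNearestEdge(Vec2 p, Vec2 v0, Vec2 v1, Vec2 v2)
{
    return std::min({ distToLine(p, v0, v1),
                      distToLine(p, v1, v2),
                      distToLine(p, v2, v0) });
}
```

In the shader version, the per-vertex distances are written to a varying and the hardware interpolator delivers the per-fragment distance essentially for free.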

The distance to the closest edge is then used to blend the wireframe nicely onto the solid rendering, and it is done in a single pass!
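For the blending step, a smooth falloff of line intensity with pixel distance works well; the exp2(-2·d²) kernel below is the one I recall from NVIDIA’s solid-wireframe demo, but any smooth kernel will do (treat the exact constant as an assumption):

```cpp
#include <cmath>

// Wireframe blend factor: 1.0 exactly on the edge, smooth falloff with
// distance d in pixels. Final color = mix(fillColor, wireColor, intensity).
double edgeIntensity(double d)
{
    return std::exp2(-2.0 * d * d);
}
```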

You may wish to consult the NVIDIA implementation and the original paper for a better explanation.

Another thing you can do to find edges is compare a centroid varying with a standard varying in your shader; where they differ, you’ve got an edge pixel (Engel credits Nicolas Thibieroz for this). Humus also uses a technique (in DX) with frac( SV_Position ) to find edges, which doesn’t require the varyings but probably doesn’t work with SSAA. Both are described here. I haven’t actually tried these, but they could potentially let you do single-pass wireframe fill rendering.

I tried the cel-shading technique, where the rendering is done in two passes for edge detection. It works great, but the edges are view-dependent. Is there any tweak to make these edges always render regardless of viewing angle?

The method that requires setting the line width to a value > 1 is not compatible with the new spec, since that feature will be dropped.


Well, in a nutshell, I’m looking for the same effect as Google SketchUp’s rendering mode.

Just a quick set of links for that, seeing as it still has, in my opinion, way too few implementations for its simplicity :slight_smile:

Siggraph sketch:
Somewhat hidden paper later published:
Nvidia implementation: