I’m working in an OpenGL-based CAD
application code base that achieves smooth, fast redraw of a 3D object being manipulated. Before manipulation begins, we read a snapshot of the background (both the color and depth buffers); then, instead of redrawing the entire scene as the object moves, we restore only the damaged part of the screen from the saved snapshot (again, both color and depth buffers) and draw the object in its new location.
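For concreteness, the save/restore path is essentially the standard glReadPixels/glDrawPixels pattern. A minimal sketch (variable names and the GL_UNSIGNED_INT depth format are my choices here, not necessarily what the code base actually uses):

```
/* Save, before manipulation starts (assumes a current GL context): */
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, colorSnap);
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depthSnap);

/* Restore the damaged rectangle each frame. Depth writes must be on and
 * the depth test must not reject the restored values; color writes are
 * masked while the depth image is drawn, since glDrawPixels with
 * GL_DEPTH_COMPONENT otherwise writes the current raster color. */
glRasterPos2i(x, y);
glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, colorSub);
glDepthMask(GL_TRUE);
glDepthFunc(GL_ALWAYS);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depthSub);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthFunc(GL_LESS);
```

(Reading the depth buffer as GL_UNSIGNED_INT rather than GL_FLOAT avoids a float conversion that can round integer depth values; I'm not certain which format our code uses.)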
We use OpenGL polygon offset to highlight the faces of selected objects. The offset correctly separates the coplanar polygons visually (one color appears “on top”) when the scene is rendered from scratch, and the color and depth buffers are then saved for the scheme described above. My problem is this: when the damaged area is repaired (with glDrawPixels) from the saved color/depth buffer contents, the very same polygons that rendered cleanly originally now all z-fight with each other (I see stripes of the two fighting polygons’ colors).
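In case it matters, the highlight pass looks roughly like this (a sketch; drawSelectedFaces and the offset values are illustrative, not our real code):

```
/* The highlight geometry is coplanar with the base face; polygon offset
 * is the only thing separating them in depth. */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);   /* pull the highlight toward the eye */
drawSelectedFaces();             /* hypothetical helper */
glDisable(GL_POLYGON_OFFSET_FILL);
```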
Any suggestions? I have been looking into the following areas in search of a cause:
- Am I losing depth-buffer precision on either the read or the write of the snapshot?
- I have noticed that some machines/graphics cards don’t exhibit this behavior. Is it hardware-related?
- Am I using bad values for polygon offset in the first place? I am using a simple, empirically determined constant; it seems the offset should be calculated from the depth-buffer precision and the near/far plane distances. Are there other factors too?
Don’t feel obligated to respond to all of my questions; any suggestions are appreciated.