Z-fighting: stripes

I’m working in an OpenGL-based CAD application code base that achieves smooth, fast redraw of a 3D object being manipulated by taking a snapshot of the background (color and depth buffers) before the manipulation begins. Then, instead of redrawing the entire scene as the object moves, we restore only the damaged part of the screen from the saved snapshot (again, both color and depth buffers) and draw the object in its new location.
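Roughly, the save/restore path looks like the sketch below (simplified; the RGBA/GL_FLOAT formats and the region coordinates are placeholders rather than our exact code):

    /* Sketch of the snapshot scheme -- formats and coordinates are placeholders. */
    #include <GL/gl.h>
    #include <stdlib.h>

    static GLubyte *savedColor;
    static GLfloat *savedDepth;

    void save_snapshot(GLint x, GLint y, GLsizei w, GLsizei h)
    {
        savedColor = malloc((size_t)w * h * 4);
        savedDepth = malloc((size_t)w * h * sizeof(GLfloat));

        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(x, y, w, h, GL_RGBA,            GL_UNSIGNED_BYTE, savedColor);
        glReadPixels(x, y, w, h, GL_DEPTH_COMPONENT, GL_FLOAT,         savedDepth);
    }

    void restore_damaged_region(GLint x, GLint y, GLsizei w, GLsizei h)
    {
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glRasterPos2i(x, y);                 /* assumes a window-aligned projection */
        glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, savedColor);

        glRasterPos2i(x, y);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* depth-only pass */
        glDepthFunc(GL_ALWAYS);              /* depth glDrawPixels is still depth-tested */
        glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, savedDepth);
        glDepthFunc(GL_LESS);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    }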

We use OpenGL polygon offset to highlight the faces of selected objects. The polygon offset separates the polygons correctly (visually, one color appears “on top”) when the scene is rendered from scratch, and the color and depth buffers are saved for the scheme described above. My problem is this: when the damaged area is repaired (with glDrawPixels) from the saved color/depth buffer contents, the very same polygons that rendered nicely originally now all z-fight with each other (I see stripes of the two fighting polygons’ colors).
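The highlight drawing itself is the usual polygon-offset pattern, roughly as below (the offset values are placeholders rather than the constant we actually use, and draw_face()/draw_face_highlight() are hypothetical helpers):

    #include <GL/gl.h>

    void draw_face(void);            /* base geometry (hypothetical helper) */
    void draw_face_highlight(void);  /* coplanar highlight polygon (hypothetical helper) */

    /* Sketch of the selection-highlight pass. */
    void draw_selected_face(void)
    {
        draw_face();

        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(-1.0f, -1.0f);   /* pull the highlight toward the viewer */
        draw_face_highlight();
        glDisable(GL_POLYGON_OFFSET_FILL);
    }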

Any suggestions? I have been trying to look into the following areas in search of a cause:

  • Am I losing precision of the depth buffer values either on the read or write of the snapshot?
  • I have noticed that some machines/graphics cards don’t display this behavior. Is it hardware related?
  • Am I using bad values for polygon offset in the first place? I am using a simple, empirically determined constant; it seems it should be calculated from the depth buffer precision and the near/far plane distances. Are there other factors too?

Don’t feel obligated to respond to all of my questions; any suggestions are appreciated.

Thanks!
Rick

Technically, if you use glDrawPixels with the same pixel format, you shouldn’t see any z-fighting, because you are just copying back colors that were correct before, and the copy doesn’t do any z-buffer check of its own. And since you said it works on some cards and not others, I believe the problem is in the type/format you use to read and write the pixels.

I would compare those with what you actually have in your pixel format. I don’t know if you are aware of this, but the pixel format you get can be different from the one you requested; this varies from card to card.
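For example, something like this (just a sketch) will tell you the bit depths you actually ended up with once the context is current:

    #include <stdio.h>
    #include <GL/gl.h>

    /* Sketch: report the pixel format the context actually received. */
    void report_actual_format(void)
    {
        GLint r, g, b, a, depth;
        glGetIntegerv(GL_RED_BITS,   &r);
        glGetIntegerv(GL_GREEN_BITS, &g);
        glGetIntegerv(GL_BLUE_BITS,  &b);
        glGetIntegerv(GL_ALPHA_BITS, &a);
        glGetIntegerv(GL_DEPTH_BITS, &depth);
        printf("color %d/%d/%d/%d, depth %d bits\n", r, g, b, a, depth);
    }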

[This message has been edited by Gorg (edited 08-11-2000).]

Depending on the destination format you use for the depth data, you probably have some conversion going on. Normally the depth buffer has a depth of 16, 24, or 32 bits.
My suggestion would be not to use a floating-point representation for intermediate storage but unsigned int (32-bit), so that the conversion, if any, is just a bit shift.
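Something along these lines (a sketch only; x, y, w, h are placeholders):

    #include <GL/gl.h>
    #include <stdlib.h>

    /* Sketch: keep the depth snapshot as 32-bit unsigned ints so the driver
     * only has to shift bits instead of converting through float. */
    void save_and_restore_depth(GLint x, GLint y, GLsizei w, GLsizei h)
    {
        GLuint *depth = malloc((size_t)w * h * sizeof(GLuint));

        glReadPixels(x, y, w, h, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depth);

        /* ... later, when repairing the damaged area (color writes off,
         * depth test forced to pass so every value lands in the buffer) ... */
        glRasterPos2i(x, y);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthFunc(GL_ALWAYS);
        glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depth);
        glDepthFunc(GL_LESS);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        free(depth);
    }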

Even faster: if you don’t need the data for anything other than save and restore, have a look at the definition of GL_KTX_buffer_region, which was designed for exactly this purpose. The data is kept on board as long as there is enough memory left on the adapter.
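I don’t have the spec in front of me, but the usage pattern is roughly the one below. Treat the entry-point names, signatures, and token values as assumptions from memory and check them against the actual extension spec:

    #include <windows.h>
    #include <GL/gl.h>

    /* Assumed token values -- verify against the GL_KTX_buffer_region spec. */
    #define GL_KTX_BACK_REGION 0x1
    #define GL_KTX_Z_REGION    0x2

    typedef GLuint (APIENTRY *NEWREGIONPROC)(GLenum type);
    typedef void   (APIENTRY *READREGIONPROC)(GLuint region, GLint x, GLint y,
                                              GLsizei w, GLsizei h);
    typedef void   (APIENTRY *DRAWREGIONPROC)(GLuint region, GLint x, GLint y,
                                              GLsizei w, GLsizei h,
                                              GLint xDest, GLint yDest);

    void snapshot_with_buffer_region(GLint x, GLint y, GLsizei w, GLsizei h)
    {
        /* Only do this if "GL_KTX_buffer_region" appears in glGetString(GL_EXTENSIONS). */
        NEWREGIONPROC  glNewBufferRegion  = (NEWREGIONPROC) wglGetProcAddress("glNewBufferRegion");
        READREGIONPROC glReadBufferRegion = (READREGIONPROC)wglGetProcAddress("glReadBufferRegion");
        DRAWREGIONPROC glDrawBufferRegion = (DRAWREGIONPROC)wglGetProcAddress("glDrawBufferRegion");

        GLuint colorRegion = glNewBufferRegion(GL_KTX_BACK_REGION);
        GLuint depthRegion = glNewBufferRegion(GL_KTX_Z_REGION);

        /* Save: the data never leaves the card, so no format conversion at all. */
        glReadBufferRegion(colorRegion, x, y, w, h);
        glReadBufferRegion(depthRegion, x, y, w, h);

        /* ... manipulate the object, then restore the damaged area in place ... */
        glDrawBufferRegion(colorRegion, x, y, w, h, x, y);
        glDrawBufferRegion(depthRegion, x, y, w, h, x, y);
    }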