Feeding OpenGL a Z-buffer?

Is it possible to have a “saved” Z-buffer array, and then feed those values to OpenGL to use for each frame? I want to create a “backdrop” style game like “Resident Evil” or “Myst”. Is it possible to do this, and then render other objects as normal, checking against the fed-in Z-buffer?

Yes, you can set the z-buffer using:

glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, myPtrToASavedZBufferAsFloats);

glDrawPixels with GL_FLOAT will probably not be fast enough. If you use glDrawPixels at all, save and restore the depth data as GL_UNSIGNED_INT.
Much(!) faster is to draw it once and then save and restore it with the WGL_ARB_buffer_region extension.
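If you go the glReadPixels/glDrawPixels route, it boils down to something like this (untested sketch; width and height are placeholders, and it assumes a projection that maps (0,0) to the lower-left corner for the raster position):

/* One-time: read the depth buffer back as unsigned ints. */
GLuint *savedDepth = (GLuint *)malloc(width * height * sizeof(GLuint));
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, savedDepth);

/* Per frame: write it back before drawing the moving objects. */
glRasterPos2i(0, 0);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   /* touch only the depth buffer */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);                                /* let every incoming depth value through */
glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, savedDepth);
glDepthFunc(GL_LESS);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);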

Ok, that last post slightly confused me. Are you saying I can load my array into the Z-buffer in memory for frame #1, and then simply restore said buffer for frames #2-#n? Or am I WAY off in my understanding?

Also, what is the difference between using ints vs. floats for the depth buffer? I thought the depth buffer was ALL float values.

Yes, that is what is being said.

The depth buffer only pretends to be a float value. As with color values in OpenGL, the actual hardware uses a hidden internal representation (typically 16 or 24 bits of fixed-point per pixel).
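You can check what the context actually gives you with the old GL_DEPTH_BITS query (minimal sketch):

GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);   /* typically 16 or 24 */
/* Whatever you hand glDrawPixels (GL_FLOAT or GL_UNSIGNED_INT) gets converted
   to this internal precision anyway. */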

Ok, after reading that last post, I am now a little worried. Say I create a “grand” scene in Blender, or LightWave, etc. Now I render it using said program and save that render to a texture file. Then I find a way to either save or export the Z-buffer produced for that image from said program.

I take the rendered image, load it as a texture on a backdrop quad, and render the quad. I then overwrite the current OpenGL depth buffer with the depth buffer I exported from my program (I am attempting to create a backdrop that objects can be rendered “in”, kind of like Resident Evil: although it is a backdrop image, you can walk in front of some of the scenery, etc.). Now, when I attempt to render any “new” objects in my scene, will they work with the current depth buffer, or will there be problems? I was figuring that as long as I used the same “unit” dimensions, frustum, etc., there wouldn’t be, but now I am unsure. I don’t want to end up with some things being placed and rendered correctly, and others showing halfway through stuff.

If you have the same setup for projection and modelview matrices and depth range, you can do this (pseudo code):

// Init backdrop (slow, done once)
glDrawPixels(GL_RGBA)             // fill the color buffer
glDrawPixels(GL_DEPTH_COMPONENT)  // fill the depth buffer
wglSaveBufferRegion(whole_window) // save depth and color

for (animation)
{
  draw_moving_objects()
  SwapBuffers()
  // Restore either the whole window or, if your pixelformat is PFD_SWAP_COPY
  // and you know where you drew the objects, only those rects.
  wglRestoreBufferRegion()
}
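For reference, the extension’s actual entry points look roughly like this (untested sketch; hdc, width, height and the two helper functions are placeholders, and the wgl*BufferRegionARB pointers have to be fetched with wglGetProcAddress first):

/* One-time setup: create a region covering back color + depth, then save the backdrop. */
HANDLE region = wglCreateBufferRegionARB(hdc, 0,
    WGL_BACK_COLOR_BUFFER_BIT_ARB | WGL_DEPTH_BUFFER_BIT_ARB);
drawBackdropColorAndDepth();                      /* e.g. the two glDrawPixels calls above */
wglSaveBufferRegionARB(region, 0, 0, width, height);

/* Per frame: restore the saved backdrop, then draw the moving objects on top. */
wglRestoreBufferRegionARB(region, 0, 0, width, height, 0, 0);
drawMovingObjects();
SwapBuffers(hdc);

/* On shutdown. */
wglDeleteBufferRegionARB(region);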

I’m on a Mac, so I can’t use WGL_ARB_buffer_region. Is there any other way to restore a depth buffer that would be faster than the glReadPixels/glDrawPixels route?

Those posts were written over 3 years ago. Things have changed.

The correct way to save the depth buffer nowadays is to keep it in a renderbuffer attached to a framebuffer object. You can then use the EXT_framebuffer_blit extension to copy depth from/to renderbuffers.
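Roughly like this, assuming EXT_framebuffer_object is present as well (untested sketch; width and height are placeholders):

/* One-time setup: an FBO whose only attachment is a depth renderbuffer. */
GLuint fbo, depthRb;
glGenFramebuffersEXT(1, &fbo);
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, width, height);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);
glDrawBuffer(GL_NONE);   /* depth-only FBO: no color buffer to draw or read */
glReadBuffer(GL_NONE);
/* ... render or glDrawPixels the backdrop depth into this FBO once ... */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDrawBuffer(GL_BACK);
glReadBuffer(GL_BACK);

/* Per frame: blit the saved depth into the window's depth buffer, then draw the objects. */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height,
                     GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);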

None of the computers I have access to support EXT_framebuffer_blit, so I guess things haven’t changed enough. Thanks anyway.

None of the computers I have access to support EXT_framebuffer_blit, so I guess things haven’t changed enough.
Do you not have current drivers? Or are they ATi hardware?

Originally posted by Korval:
Do you not have current drivers? Or are they ATi hardware?
On Macs, you can’t update drivers independent of the OS, as far as I know. One is a G5 with a GeForce 5200; another is a mini with Intel graphics, and a third is a laptop with some ATI card.

It was suggested elsewhere that I look at ARB_pixel_buffer_object, which is available on my Macs.
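For reference, the ARB_pixel_buffer_object idea would be to keep the saved depth values in a buffer object so the per-frame glDrawPixels does not have to pull them from client memory each time. A rough, untested sketch (savedDepth, width and height are placeholders):

/* One-time: upload the saved depth values into a pixel unpack buffer. */
GLuint depthPbo;
glGenBuffersARB(1, &depthPbo);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, depthPbo);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB,
                width * height * sizeof(GLuint), savedDepth, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);

/* Per frame: with the PBO bound, the glDrawPixels data pointer becomes an offset into it. */
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, depthPbo);
glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, (const GLvoid *)0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);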

You can use glCopyPixels and an aux buffer. Use kCGLPFAAuxDepthStencil on the Mac to make each aux buffer retain its own depth buffer.
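Something along these lines (untested sketch; width, height and the two draw helpers are placeholders):

/* Once: draw the backdrop's color and depth into AUX0, which keeps its own depth buffer. */
glDrawBuffer(GL_AUX0);
drawBackdropColorAndDepth();          /* hypothetical helper: renders the backdrop once */
glDrawBuffer(GL_BACK);

/* Per frame: copy AUX0's color and depth into the back buffer, then draw objects on top. */
glReadBuffer(GL_AUX0);
glRasterPos2i(0, 0);                  /* assumes a projection that puts (0,0) at the lower left */
glDisable(GL_DEPTH_TEST);             /* color copy should not touch the depth buffer */
glCopyPixels(0, 0, width, height, GL_COLOR);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);               /* force the copied depth values into the buffer */
glCopyPixels(0, 0, width, height, GL_DEPTH);
glDepthFunc(GL_LESS);
glReadBuffer(GL_BACK);
drawMovingObjects();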

Originally posted by arekkusu:
You can use glCopyPixels and an aux buffer. Use kCGLPFAAuxDepthStencil on the Mac to make each aux buffer retain its own depth buffer.
Using an auxiliary buffer makes sense, but the documentation is confusing… for kCGLPFAAuxDepthStencil, it says “If true, each auxiliary buffer has its own depth stencil.” I know what a depth buffer is, and I know what a stencil buffer is, but what’s a depth stencil?

Actually I’m using the AGL API, and the documentation for AGL_AUX_DEPTH_STENCIL is even less clear; it doesn’t specify whether the associated value is supposed to be a boolean, a number, or something else. There was a post on the Mac OpenGL mailing list a year ago asking how to use AGL_AUX_DEPTH_STENCIL, and nobody answered it.