I am generating an image in an offscreen buffer and caching it for later renders. I currently use glDrawPixels to write the RGBA and depth values into the framebuffer. I would like to be able to set the depth values of a quad using textures instead. To my disappointment, GL_ARB_depth_texture does not appear to support this. I do not have access to shaders. Here is a rough outline of the desired code:
// 1. Render highly detailed scene to an offscreen buffer
// 2. Cache RGBA data from buffer into a 2D texture
// 3. Cache depth data from buffer into a 2D texture
// 4. In fast mode, draw a quad with both textures so that the resulting image is the same as the buffer (including depth)
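Steps 2 and 3 might look roughly like the sketch below, copying straight from the just-rendered buffer with glCopyTexImage2D. This is GL state setup, so it only runs inside a live context; colorTex and depthTex are placeholder texture objects created elsewhere, and the power-of-two rounding matters on older hardware that lacks NPOT texture support. The depth copy relies on GL_ARB_depth_texture for the GL_DEPTH_COMPONENT format.

```c
#include <GL/gl.h>

/* Older hardware requires power-of-two texture sizes, so round the
   buffer dimensions up before allocating the cache textures. */
static int next_pow2(int v)
{
    int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

/* Hypothetical cache step: copy the current buffer's color into
   colorTex and its depth into depthTex (needs GL_ARB_depth_texture). */
static void cache_buffer(GLuint colorTex, GLuint depthTex, int w, int h)
{
    int tw = next_pow2(w), th = next_pow2(h);

    glBindTexture(GL_TEXTURE_2D, colorTex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 0, 0, tw, th, 0);

    glBindTexture(GL_TEXTURE_2D, depthTex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, tw, th, 0);
}
```

Step 4 is where it breaks down: the depth texture can be bound and sampled, but fixed-function texturing has no way to write those samples into the depth buffer.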
Any help is very much appreciated.
As far as I know, there is no way to modify depth during rasterisation without using a shader, so it will have to be a glDrawPixels-like approach. If you have GL_EXT_framebuffer_blit, it may well be faster than glDrawPixels, since the copy stays on the GPU and bypasses the pixel-transfer path.
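Before taking the blit path, check the extension string first. A naive strstr is not enough, because one extension name can be a prefix of another, so the match has to land on word boundaries. A small checker (pure C; the glGetString(GL_EXTENSIONS) call itself needs a live context, so the function just takes the string):

```c
#include <string.h>

/* Returns 1 if `name` appears as a whole space-separated word in
   `ext_list` (the string returned by glGetString(GL_EXTENSIONS)). */
static int has_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;

    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == ext_list) || (p[-1] == ' ');
        int ends_ok   = (p[len] == '\0') || (p[len] == ' ');
        if (starts_ok && ends_ok)
            return 1;
        p += len;
    }
    return 0;
}
```

If has_extension(..., "GL_EXT_framebuffer_blit") succeeds, you can bind the cached buffer as the read framebuffer and call glBlitFramebufferEXT(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST) - depth blits require GL_NEAREST filtering.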
If it's of any help: you can use the alpha channel of the color texture to store depth. This can be done with linear texgen and a simple 256x1 texture with GL_NEAREST filtering.
You can then use alpha test in combination with depth test.
Note that the reference value for the alpha test is a constant, so to get a src-alpha-to-dst-alpha comparison you should subtract the alpha channels of two textures - one is your buffer, the other is a 1D texture linearly mapped onto the drawn object.
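Concretely, the 256x1 ramp texture and the linear texgen feeding its s coordinate could be set up as below. This is a sketch, not runnable without a GL context, and the eye-plane coefficients are placeholders (they map eye-space -z over a hypothetical near=1, far=100 range to s in [0,1]; a real setup would derive them from the actual projection, and note that eye-linear depth is not the same distribution as the nonlinear window-space z that glDrawPixels writes):

```c
#include <GL/gl.h>

/* Build the 256x1 alpha ramp: texel i has alpha i/255 (or 255-i for
   the reversed copy used to negate one term of the comparison).
   GL_NEAREST sampling gives a clean 8-bit depth encoding. */
static void setup_depth_ramp(GLuint tex, int reversed)
{
    GLubyte ramp[256];
    int i;
    for (i = 0; i < 256; ++i)
        ramp[i] = (GLubyte)(reversed ? 255 - i : i);

    glBindTexture(GL_TEXTURE_1D, tex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_ALPHA8, 256, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, ramp);

    /* Linear texgen: s = dot(eye_plane, eye-space vertex).
       Placeholder plane for near=1, far=100:
       s = (-z - near) / (far - near), i.e. 0 at the near plane,
       1 at the far plane. */
    {
        GLfloat plane[4] = { 0.0f, 0.0f,
                             -1.0f / (100.0f - 1.0f),
                             -1.0f / (100.0f - 1.0f) };
        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGenfv(GL_S, GL_EYE_PLANE, plane);
        glEnable(GL_TEXTURE_GEN_S);
    }
}
```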
To stay backward compatible you can use the GL_ADD env mode and negate one of the textures (simply use a reversed texgen on that 1D texture). The most backward-compatible way of adding the alpha channels of two textures is to use one texture with an alpha channel and the other in GL_INTENSITY format.
If the summed value equals 1.0, the depth values are equal (remember that one of the textures is reversed).
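A quick sanity check of that arithmetic (plain C, no GL; add_combine stands in for what the GL_ADD environment computes on the combined channel):

```c
/* With one ramp reversed, the add stage computes
   stored_depth + (1 - fragment_depth), clamped to [0,1] as GL does.
   The unclamped sum equals 1.0 exactly when the two depths match;
   because of the clamp, nearer fragments also read 1.0, which is why
   the alpha test works as a less-or-equal depth comparison. */
static float add_combine(float stored_depth, float fragment_depth)
{
    float sum = stored_depth + (1.0f - fragment_depth);
    return sum > 1.0f ? 1.0f : sum;
}
```

So an alpha test of GL_GEQUAL against 1.0 passes fragments at or in front of the stored depth, which is the shadow-map comparison mentioned below.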
I've managed to implement shadow maps this way. They're only 8-bit, but they work on any hardware that supports multitexturing and GL_ADD, such as the RIVA TNT2.