In trying to write some really nice Cg effects we have run into a few problems concerning reading the values written into buffers (especially the depth buffer) in previous passes.
One of the simpler things we want to achieve is to render the scene in pass 0 and then find the minimum and maximum values of the entire depth buffer so we can use them in pass 1. There must be a more efficient way than using glReadPixels and then iterating over the values (which turns fps into spf).
The second problem arises when, for every fragment, we want to read the corresponding depth value written in the preceding pass in order to estimate the thickness of a geometric body. We have no clue how to achieve this goal.
Any hints or clues will be greatly appreciated.
To get the min/max, you would need histogram support, which is typically not hardware accelerated. (Actually, I’m unsure whether the histogram path gives you depth data at all.)
To get data from the previous pass, you have to bind the buffer as a texture. Look into the various render texture extensions, and the depth compare / shadow extensions.
Create a p-buffer,
clear it to some value,
enable blending with glBlendEquation(GL_MIN),
render everything with a fragment shader that writes the depth value to the red channel,
switch to glBlendEquation(GL_MAX),
render everything again, writing the depth value to the green channel,
then do what you have to with that.
Thickness of the body eh? What effect are you trying?
It’s going to be an approximation of Beer’s law. We’re going to render the scene with the back depth values of the visible geometry and then render it again normally, using wavelength-dependent attenuation that needs to know the depth of the geometry the light passes through. So every fragment needs to know the back depth value.
We’re also creating an effect where the diffuse color depends on depth. It looks nice, but we need to adapt the scale depending on the geometry in the scene. For this reason we need the total min and max values in the depth buffer.
Interesting idea, but for glBlendEquation the GL_ARB_imaging subset has to be present in hardware (as for histograms).
You are right, blend min/max is an extension of its own too (GL_EXT_blend_minmax, with glBlendEquationEXT as the entry point).
But glBlendEquation is also part of GL_ARB_imaging (hence the version without the ARB or EXT suffix; I believe ARB_imaging was the first ARB extension and not very clearly named):
GL_ARB_imaging (Imaging Subset)
Complete imaging subset providing: Color Tables, Convolution, Color Matrix, Histogram, Constant Blend Color, Blend Subtract and Blend Min/Max.
The Beer’s law effect is finished and the beautiful result can be seen in the following pictures:
A huge thanks for the quick replies.
Nice to see the results.
What do you do when part of an object occludes itself, as is obviously the case with a torus? You would need a multipass method resembling order-independent transparency (OIT).
We’re currently not taking that into account at all and can thus get away with two passes. Adding more passes and applying some careful thinking might improve the result in that regard though.