What I want to do is combine "conventional" rendering (say, rendering something into a render target) with an overlaid ray-traced image.
I have a ray-tracing kernel that outputs to a PBO, which I can later draw as a texture. I can also generate depth for any point in my ray-traced image, so I can get a depth buffer with floating-point depth values. I then want to draw, say, a sphere or a box over my ray-traced image, but I want the sphere or box to account for the depth that my ray-traced image has. So the question is: how can I put my floating-point depth values into the depth buffer (a GL_DEPTH_COMPONENT renderbuffer)?
Am I right in assuming that I can do something like this: draw a quad, and in the pixel shader output the values from my buffer of depth values?
e.g. gl_FragColor = texture2D(ray_traced_depth, texCoord.xy).r;
The color outputs from fragment shaders are exactly that: colors. That's why it's called [var]gl_FragColor[/var]. You write to the depth buffer using [var]gl_FragDepth[/var]. So do that.
You attach the depth buffer to [var]GL_DEPTH_ATTACHMENT[/var], set the draw buffer to [var]GL_NONE[/var] (because you're not writing any colors), and then render as normal. You only write to [var]gl_FragDepth[/var].
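A minimal fragment shader for that depth-only pass might look like the sketch below. It reuses the `ray_traced_depth` sampler and `texCoord` varying from the question; note that this assumes the ray tracer's depth values are already in the window-space [0, 1] range that the rasterizer's depth test uses.

```glsl
uniform sampler2D ray_traced_depth;  // depth values from the ray tracer
varying vec2 texCoord;

void main()
{
    // No color attachment is bound (draw buffer is GL_NONE),
    // so only the depth write has any effect.
    gl_FragDepth = texture2D(ray_traced_depth, texCoord.xy).r;
}
```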
Really, though, it would probably be quicker to upload your values directly to the depth buffer with [var]glTexSubImage2D[/var]. It would help to put your depth values in 24-bit unsigned integer format first, so that OpenGL doesn't have to do the float-to-integer conversion. Alternatively, you could simply use [var]GL_DEPTH_COMPONENT32F[/var] as the format, so you can upload floating-point values directly without conversion.
Yeah, it is actually gl_FragDepth.
I agree, converting directly to 24-bit integer values could be faster.
Thanks very much for the help.