FP exposure adjustment logistics

Hi everyone,

I’m beginning to create a real-time exposure adjustment effect in OpenGL, and before I hit my head on too many walls I thought I’d ask this board for some general advice.

My situation is basically this - I have an fp16 texture to which my scene is being rendered through a framebuffer object. It is, as you’d expect, large (the same as the screen resolution) and not a power of two. I can re-render it to the color buffer with any exposure function I like using a screen-aligned quad. So far, so good.
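For reference, here’s roughly the setup I’m describing (a sketch only - error handling trimmed, and it assumes a current GL context with GLEW providing the EXT_framebuffer_object and ARB_texture_float entry points):

```cpp
#include <GL/glew.h>

// Screen-resolution, non-power-of-two fp16 color target rendered to
// through an FBO.
GLuint sceneTex, sceneFbo;

void createSceneTarget(int screenW, int screenH)
{
    glGenTextures(1, &sceneTex);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, screenW, screenH, 0,
                 GL_RGBA, GL_HALF_FLOAT_ARB, NULL);

    glGenFramebuffersEXT(1, &sceneFbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sceneFbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, sceneTex, 0);
    // Anything other than GL_FRAMEBUFFER_COMPLETE_EXT usually means this
    // format combination is unsupported on the hardware.
    glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
}
```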

The problem is that for an effect simulating the human eye’s response to brightness changes, I need the current overall brightness of the scene as an input - or better yet, some kind of short-term running average serving as a ‘history’ of brightness, to give a time-delayed response to brightness changes.

Now, it would seem that if I could just mipmap my render target, the 1x1 mipmap level would provide exactly that value - the average intensity of all the texels in the full-res image. So I’ve tried this through the glGenerateMipmapEXT call, and it brings the app to a crawl and doesn’t even seem to work (GeForce 7900, newish drivers).
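For concreteness, the automatic path I tried amounts to this (a sketch; sceneTex is the fp16 target from above):

```cpp
// Automatic mipmap generation via EXT_framebuffer_object. Two common
// pitfalls: the call operates on the currently bound texture, and the
// min filter must actually use mipmaps for the chain to be sampled.
// On this era of hardware, fp16 and/or NPOT generation may also fall
// back to a software path, which would explain the slowdown.
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_NEAREST);
glGenerateMipmapEXT(GL_TEXTURE_2D); // call after rendering the frame
```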

I’ve considered manual ‘mipmapping’, where I use a convolution shader and several render passes over quads to shrink the image down to a 1x1 block which I can then use GPU-side as input to the exposure adjustment. My recollection is that this is generally considered the way to go for this sort of thing in floating point - am I missing anything here? I could even cut some corners for performance, as long as largish features of the original image aren’t missed in the downsampling.
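Something like the following is what I have in mind (a sketch - levels[] and drawFullScreenQuad() are hypothetical helpers, where each levels[i] is a progressively smaller fp16 texture/FBO pair created up front):

```cpp
// Halve the image each pass with a 2x2 box-filter shader until 1x1.
int w = screenW, h = screenH;
GLuint srcTex = sceneTex;
for (int i = 0; w > 1 || h > 1; ++i) {
    w = (w + 1) / 2;                 // round up for NPOT sizes
    h = (h + 1) / 2;
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, levels[i].fbo);
    glViewport(0, 0, w, h);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    drawFullScreenQuad();            // runs the box-filter fragment shader
    srcTex = levels[i].tex;          // this pass's output feeds the next
}
// srcTex is now 1x1 and holds the average of the whole frame.
```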

As a secondary question, would it be a good idea / feasible to read this brightness value back to the CPU somehow, through PBOs or the like? If latency becomes a problem, I think I could live with a frame or so of lag on the value. I’m assuming glGetTexImage is too slow :stuck_out_tongue:
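For what it’s worth, the PBO pattern I’m imagining looks like this (a sketch assuming ARB_pixel_buffer_object, reading the 1x1 result back with a frame of latency):

```cpp
// Frame N: glReadPixels into a pixel pack buffer returns immediately;
// frame N+1: map the buffer, by which point the transfer has completed.
GLuint lumPbo;

void initReadback()
{
    glGenBuffersARB(1, &lumPbo);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, lumPbo);
    glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, 4 * sizeof(GLfloat), NULL,
                    GL_STREAM_READ_ARB);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
}

void startReadback() // call with the 1x1 FBO bound, at the end of frame N
{
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, lumPbo);
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, 0); // 0 = offset into PBO
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
}

float finishReadback() // call during frame N+1
{
    float lum = 0.0f;
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, lumPbo);
    GLfloat* p = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB,
                                          GL_READ_ONLY_ARB);
    if (p) { lum = p[0]; glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB); }
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
    return lum;
}
```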

If anyone here has any advice or related experience on this matter, I’d be quite happy to hear it!

Thanks

Not sure about the FP buffer thing.

But I was just curious if there was any need to read the 1x1 mipmap level back to the CPU.

I would just sample that texture in the shader you are using to perform the exposure (and do the appropriate calculations there).
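Something along these lines, for example (just a sketch - the tone curve is a placeholder Reinhard-style mapping, and the key value of 0.18 is my assumption):

```cpp
// Exposure fragment shader (GLSL, embedded as a C string). The 1x1
// average-luminance texture is bound to a second unit, so the value
// never leaves the GPU.
const char* exposureFrag =
    "uniform sampler2D scene;      // full-res fp16 scene           \n"
    "uniform sampler2D avgLumTex;  // 1x1 average luminance         \n"
    "void main() {                                                  \n"
    "    vec3  hdr  = texture2D(scene, gl_TexCoord[0].xy).rgb;      \n"
    "    float avgL = texture2D(avgLumTex, vec2(0.5)).r;            \n"
    "    float key  = 0.18;              // target middle grey      \n"
    "    vec3 mapped = hdr * (key / max(avgL, 1e-4));               \n"
    "    gl_FragColor = vec4(mapped / (1.0 + mapped), 1.0);         \n"
    "}\n";
```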

For the decimation side of things I’ve had good results with anisotropic filters. They can get you to width/4 x height/4 in two passes, say; then you can follow up with a 3x3 filter to get the image down to 64x64, and 4x4 filters to take that the rest of the way down to 1x1 for the final log-average luminance and exp.
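To pin down the log-average part: the first decimation pass writes log-luminance, the box filters then average those logs, and the very last step exponentiates, i.e. Lavg = exp(mean(log(delta + L))). A sketch of the two shader ends (the Rec. 709 luminance weights and the delta guard are my choices):

```cpp
// First pass: full-res scene -> per-pixel log luminance.
const char* logLumFrag =
    "uniform sampler2D scene;                                        \n"
    "void main() {                                                   \n"
    "    vec3  c = texture2D(scene, gl_TexCoord[0].xy).rgb;          \n"
    "    float L = dot(c, vec3(0.2126, 0.7152, 0.0722));             \n"
    "    gl_FragColor = vec4(log(0.0001 + L)); // delta avoids log(0)\n"
    "}\n";

// Last pass: the 1x1 mean-of-logs -> final log-average luminance.
const char* expFrag =
    "uniform sampler2D meanLog;                                      \n"
    "void main() {                                                   \n"
    "    gl_FragColor = vec4(exp(texture2D(meanLog, vec2(0.5)).r));  \n"
    "}\n";
```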

Now you can bind your 1x1 texture for use in your adaptive luminance exposure, along with another 1x1 texture that alternately serves as source and destination (for smooth adaptation).
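A sketch of that ping-pong step (adaptFbo[] / adaptTex[] and drawFullScreenQuad() are hypothetical, ‘frame’ is just a frame counter, and ‘rate’ sets how quickly the eye adapts):

```cpp
// Blend this frame's measured luminance toward the stored history,
// writing one 1x1 texture while sampling the other.
const char* adaptFrag =
    "uniform sampler2D prevLum;   // last frame's adapted value     \n"
    "uniform sampler2D frameLum;  // this frame's measured value    \n"
    "uniform float rate;          // e.g. 1.0 - exp(-dt * speed)    \n"
    "void main() {                                                  \n"
    "    float prev = texture2D(prevLum,  vec2(0.5)).r;             \n"
    "    float cur  = texture2D(frameLum, vec2(0.5)).r;             \n"
    "    gl_FragColor = vec4(mix(prev, cur, rate));                 \n"
    "}\n";

int dst = frame & 1, src = dst ^ 1;   // swap source/destination each frame
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, adaptFbo[dst]);
glViewport(0, 0, 1, 1);
// bind adaptTex[src] and the measured 1x1 luminance to two units, then:
drawFullScreenQuad();                 // runs adaptFrag
// adaptTex[dst] now feeds the exposure shader this frame.
```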

As for glGetTexImage - I prefer it over glReadPixels - you can get a free asynchronous read on NVIDIA drivers (perhaps on ATI, too - I haven’t tried).

With both glReadPixels and glGetTexImage you must wait until rendering is finished, but with glGetTexImage you can bind another FBO and start rendering something else in the meantime. When you later call glGetTexImage you get the texture - the driver will transfer the data even if the GPU hasn’t finished that new rendering.
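In outline (the texture and function names are hypothetical, and the asynchrony is driver behaviour as described, not something the API guarantees):

```cpp
// Start rendering the next frame into a different FBO, then issue the
// readback; on drivers that implement glGetTexImage asynchronously the
// copy can overlap that new rendering.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, nextFrameFbo);
renderNextFrame();

float lum[4];
glBindTexture(GL_TEXTURE_2D, avgLumTex); // the 1x1 result
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, lum);
```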

Of course there are better ways (PBOs), but glGetTexImage is backward compatible down to OpenGL 1.1, and it doesn’t introduce any noticeable cost when implemented properly.

Yeah, what Sqrt[-1] suggests is the way to go. I’ll just add that you can keep the averaged luminance of the last n frames in a 1x1 texture and simply blend the current frame’s luminance on top of that value.

Yes, this idea of blending onto another 1x1 texture over time is a good one; it’ll save me from having to transfer anything to the CPU.

Leghorn, what exactly do you mean by an anisotropic filter? It sounds interesting, but I’m not quite sure what it means. Are we just talking about a standard 2x2 box filter to get down to w/4 x h/4 in two passes?

Thanks all

The fp16 format probably has no hardware support for accelerated mipmap generation. Render to a format whose mipmap generation is accelerated, or just use your final RGBA8 output color buffer (which will introduce one frame of latency).