High precision single component render to texture?


I’m currently working on a project that graphs data samples in OpenGL and then saves or displays the results. The images being generated are grayscale, and the file format used when saving requires one 16- or 32-bit value per pixel.


How can I render high precision data to the framebuffer, a pbuffer, or a texture? Is there a way to render to texture using one of the 16- or 32-bit alpha/luminance texture formats? Are those formats even hardware accelerated?

I know that float buffers are an option, but we’re targeting a lot of older hardware (GeForce 3/GeForce 4).

Also, I don’t mind pulling back RGB data and filtering out the extra components; as long as it works I’ll be happy. :slight_smile:
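For what it’s worth, the channel-filtering step is cheap on the CPU. Here’s a minimal sketch (the helper name `extract_red` is just illustrative, not any API) that takes a tightly packed buffer as returned by `glReadPixels(..., GL_RGB, GL_UNSIGNED_BYTE, ...)` and keeps only the red channel:

```c
#include <stddef.h>
#include <stdint.h>

/* Copy just the red channel of a packed RGB readback into a
 * single-component grayscale buffer. 'pixels' is the pixel count,
 * so 'rgb' holds 3 * pixels bytes and 'gray' holds 'pixels' bytes. */
static void extract_red(const uint8_t *rgb, uint8_t *gray, size_t pixels)
{
    for (size_t i = 0; i < pixels; ++i)
        gray[i] = rgb[3 * i];   /* R of pixel i; G and B are skipped */
}
```

(If the readback width isn’t a multiple of 4, remember to set `glPixelStorei(GL_PACK_ALIGNMENT, 1)` first so the rows really are tightly packed.)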


I also want to know how to do this. Could anyone help us?

GeForce 3 has an internal precision of just 9 bits per channel AFAIK (8-bit fraction plus sign). Everything that passes through pixel processing is limited by this constraint. So even if you could render to an ALPHA16 texture (which I’m not sure is possible), you wouldn’t get more than 8 bits of effective precision.

You could theoretically do some compositing with basic multitexturing, using two textures in place of vertex colors: one texture with GL_REPEAT wrapping supplying the LSBs, and another with texture coordinates scaled down by 1/256 supplying the MSBs (you lose any “real” texturing capability in the process because you can’t guarantee proper wraparound). This, again, doesn’t directly work on GeForce 3 class hardware because of the lack of internal “combiner” precision.
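The texture-preparation side of that idea is simple byte splitting on the CPU. A rough sketch (function name is mine, not from any library): each 16-bit sample becomes one byte in the MSB plane and one byte in the LSB plane, and the two planes get uploaded as separate single-channel textures.

```c
#include <stddef.h>
#include <stdint.h>

/* Split 16-bit samples into two 8-bit planes for the two-texture trick:
 * 'lsb' would back the GL_REPEAT texture, 'msb' the texture addressed
 * with coordinates scaled down by 1/256. */
static void split_msb_lsb(const uint16_t *src,
                          uint8_t *msb, uint8_t *lsb, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        msb[i] = (uint8_t)(src[i] >> 8);    /* high byte */
        lsb[i] = (uint8_t)(src[i] & 0xFF);  /* low byte  */
    }
}
```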

But you can do it with a multipass rendering approach: render the MSBs first and read them back, then render the LSBs and read them back as well, and finally combine the two buffers into a single 16-bit-per-channel image on the CPU. You could also throw in a third pass for “real” textures and blend the result into your high precision colors (again, on the CPU).