numerical precision

This is fairly fundamental, so I had posted it in the beginners forum but perhaps someone here can clear it up.

Given that (fancy new extensions excluded) the specs clamp all pixel values to [0, 1], what about internal computations using textures?

For example, say several textures are specified with type GL_FLOAT, containing values from some arbitrary computation.
A fragment program is then used to combine values looked up from the textures, the output being written to the framebuffer, clamped to [0, 1] as usual.
Provided we’re not upset about the final values being clamped, what kind of precision can we rely on inside the fragment processor when the textures are combined?

From the descriptions of the pipeline in the red book, I gather that all data unpacked from a texture is clamped to the range [0, 1], with 8 bits of precision to the right of the binary point (assuming that we asked for RGBA8, and we’re not using a float buffer). Is this correct? I realise the newer hardware supports float buffers, but I want to test the limits of the current hardware first.

Any help appreciated.


The precision is not defined; the answer is that it varies with the hardware. Intermediate results are clamped to [0, 1]. However, you often have 9 bits of precision or more, even on some hardware that is a couple of years old.

Hardware provides some of those fancy not-so-new extensions to let you fake extended range, by doing things like scaling or dividing by two on the way into and out of the texture units, and other related tricks. Failing this, you can write very portable vanilla code that simply keeps the numbers low on the way in and scales by two on the way out, with a blend function that contributes the source fragment twice to the destination buffer. That trick is a bit dated, but if you’re after vanilla code perhaps it’s what you want.

The point is that you can trade some of your precision (of which you may have more than you assume) for some crude extended-range ability. You don’t have to lose texture precision on the scale on the way in either, because if you’re using a modulate texture operation you can scale the texture contribution down without losing the original precision.

[This message has been edited by dorbie (edited 04-02-2003).]

I’m still somewhat confused about the varying levels of precision available at different points in the pipeline. As an example, say I want to store the results of a function with output range [0, 1] in a texture.

If I specify the texture data as GL_FLOAT, does that have any effect at all on the amount of precision I might get?

I note that I can specify the internal format of the texture as “GL_RGB16”, but I assume that I can only get that resolution if the hardware supports it.

Is there any way I can query the precision levels of the various GL formats on my hardware?

Thanks again,

There’s no benefit to specifying the external data in higher precision than the hardware supports (unless perhaps you intend to apply color-space transformations during the upload that might benefit, which is rare), but the internal format does matter where it is supported, although a larger format may slow you down.

You can query internal precision with glGetTexLevelParameteriv using the following tokens: GL_TEXTURE_RED_SIZE, GL_TEXTURE_GREEN_SIZE, GL_TEXTURE_BLUE_SIZE and GL_TEXTURE_ALPHA_SIZE.
Querying the internal format might also return what you actually got, but I’d be less certain of that.
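A sketch of that query (the wrapper name is mine; it assumes a texture is already bound to the given target in a current GL context, so it is a fragment to drop into existing setup code rather than a stand-alone program):

```c
#include <stdio.h>
#include <GL/gl.h>

/* Report the per-component bits the implementation actually chose
 * for the texture bound to `target` (requires a live GL context). */
void report_texture_precision(GLenum target)
{
    GLint r, g, b, a, internal;
    glGetTexLevelParameteriv(target, 0, GL_TEXTURE_RED_SIZE,   &r);
    glGetTexLevelParameteriv(target, 0, GL_TEXTURE_GREEN_SIZE, &g);
    glGetTexLevelParameteriv(target, 0, GL_TEXTURE_BLUE_SIZE,  &b);
    glGetTexLevelParameteriv(target, 0, GL_TEXTURE_ALPHA_SIZE, &a);
    glGetTexLevelParameteriv(target, 0, GL_TEXTURE_INTERNAL_FORMAT,
                             &internal);
    printf("RGBA bits: %d/%d/%d/%d, internal format: 0x%x\n",
           r, g, b, a, (unsigned)internal);
}
```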

Note that the internal format of a texture does not tell you the precision of the texture units that perform the arithmetic. It is desirable to have more precision in the arithmetic and intermediate storage than in the texture representation, and implementations do this for the smaller formats. Other implementations that support larger formats can lose precision in the arithmetic; there are no hard and fast rules, it’s really hardware-specific and knowledge-based.

[This message has been edited by dorbie (edited 04-02-2003).]