I’m working on a volume renderer. If you don’t know what that is, that’s OK; the important thing to know is that it involves big 3D textures.
Because the textures are so big (256^3 texels at 8 bits each), it’s important that I be able to use 8-bit, single-channel internal formats to store them and save memory.
The fragment program goes like this: sample the 3D texture to get a single scalar value, then use that value as an index into a 1D RGBA texture to get the final color.
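To make the setup concrete, here is a rough CPU simulation of that two-stage lookup in NumPy. This is just an illustrative sketch of the dependent-texture idea described above, with made-up array sizes and names, not the actual fragment program:

```python
import numpy as np

rng = np.random.default_rng(0)

# 8-bit single-channel volume (a tiny stand-in for the 256^3 3D texture)
volume = rng.integers(0, 256, size=(8, 8, 8), dtype=np.uint8)

# 1D RGBA color table with 256 entries (the dependent texture)
transfer = rng.integers(0, 256, size=(256, 4), dtype=np.uint8)

def shade(x, y, z):
    """Nearest-neighbour version of the fragment program:
    sample the volume, then use the scalar as an index into the RGBA table."""
    scalar = volume[z, y, x]   # 3D texture sample -> 8-bit scalar
    return transfer[scalar]    # dependent 1D lookup -> RGBA color

color = shade(1, 2, 3)
```

The real shader of course does this per fragment with hardware filtering, which is exactly where the precision question below comes in.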
In theory, one should be able to get more than 8 bits of precision out of the 8-bit texture, thanks to linear interpolation: if one texel holds 63 and the next holds 64, the fragments in between should get values like 63.1, 63.2, 63.3, and so on. In theory…
In practice the precision is not that great. I don’t know exactly how many bits I’m getting, but it looks bad: lots of banding and aliasing. If I use a 16-bit internal format for the 3D texture, the results look good. That makes sense, but here’s the strange part: if I instead use the 8-bit value to look up into a 1D, 16-bit texture with 256 entries, where each entry is just the 8-to-16-bit conversion, and then use that 16-bit value to look up into my color texture, the results look the same as the plain 8-bit case, even though the input to the dependent lookup is now the output of a 16-bit texture.
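One hypothesis that would explain this (an assumption on my part, not confirmed hardware behavior): the filtered scalar is rounded back to 8 bits before it is used as a dependent-texture coordinate, so the sub-8-bit detail from interpolation is already gone by the time the 8-to-16-bit expansion table sees it. A 256-entry table can only ever emit 256 distinct values, no matter how wide its entries are. A small simulation of that idea:

```python
import numpy as np

a, b = 63, 64                      # two neighbouring 8-bit texel values
fracs = np.linspace(0.0, 1.0, 11)  # interpolation weights between them

# Ideal filtering: full-precision interpolated scalars (63.0, 63.1, ..., 64.0)
ideal = a * (1 - fracs) + b * fracs

# Filtering whose result is quantized to 8 bits before the dependent lookup
quantized = np.round(ideal).astype(np.uint8)

# 8-to-16-bit expansion table: entry i maps i in 0..255 onto 0..65535
expand = (np.arange(256) * 257).astype(np.uint16)

# The table itself is exact per entry, but all the in-between scalars have
# already collapsed back to 63 or 64 before indexing it.
expanded = expand[quantized]
print(len(set(expanded.tolist())))  # only two distinct outputs survive
```

If this is what the FX filtering path does, it would explain why the expansion trick looks identical to the plain 8-bit case: the quantization happens upstream of the lookup.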
Can anyone shed some light on this? It happens on every GeForce FX card I’ve tried: 5200, 5900, and Quadros. Is there a way to up-sample the values without doubling my texture memory footprint?