The output is rendered into a texture bound as the color attachment of the current FBO.
Later, I want to recover the two packed tex coords by reading the previously written texture.
Unpack code in a different fragment shader:
vec4 temp = textureCube(CubeMap, WorldSpaceLightVec);
const vec2 weights = vec2(8192.0, 8.0);
temp *= weights.xyxy; // unpack: .rg -> u coord, .ba -> v coord
vec3 texCoord;
texCoord.x = temp.x + temp.y; // u = r * 8192.0 + g * 8.0
texCoord.y = temp.z + temp.w; // v = b * 8192.0 + a * 8.0
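The pack side isn’t shown in the thread; one arithmetic inverse consistent with those weights (a sketch of my own, assuming the coordinate is in texel units in [0, 8192)) splits the value into a coarse part and a fine part, each kept in [0, 1) so it fits a texture channel:

```python
import math

def pack(u):
    """Split u (texel units, 0 <= u < 8192) into two [0, 1) channels.
    Hypothetical inverse of the unpack shader above; not from the demo."""
    hi = math.floor(u / 8.0) / 1024.0  # coarse part: multiples of 8 texels
    lo = (u % 8.0) / 8.0               # fine part: remainder within 8 texels
    return hi, lo

def unpack(hi, lo):
    # mirrors the shader: temp *= weights.xyxy; coord = temp.x + temp.y
    return hi * 8192.0 + lo * 8.0

hi, lo = pack(1234.5)
print(unpack(hi, lo))  # -> 1234.5
```

With exact arithmetic the round trip is lossless; the precision question is whether hi and lo survive storage in the texture format.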
I need to use the GL_RGBA16F_ARB internal format instead of GL_RGBA32F_ARB, in order to allow linear interpolation.
If someone has an idea why it’s not working, or how to do such a thing, please help me.
Yes, you’re right, but I found this solution in the slides for NVIDIA’s Mad Mod Mike demo, and I thought it was a trick exploiting knowledge of the half and float formats that I didn’t understand. Even after studying the bit layout of half and float, it still doesn’t make much sense to me.
Maybe 16-bit floating point is enough, as the values are between 0 and 1. Or am I just wasting time with a float format? Would it be better to use an integer internal format to store the texture coords? At least an integer format would be more compatible with older cards.
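On the integer-format idea: a [0, 1) coordinate can be spread across two 8-bit channels of a plain RGBA8 texture for roughly 16 bits of precision. A minimal sketch of that classic trick (the helper names are mine, not from the demo):

```python
import math

def pack_rgba8(u):
    # split a [0, 1) value into two 8-bit channels, each stored as [0, 1]
    scaled = u * 255.0
    hi = math.floor(scaled) / 255.0                                  # high 8 bits
    lo = math.floor((scaled - math.floor(scaled)) * 255.0) / 255.0   # low 8 bits
    return hi, lo

def unpack_rgba8(hi, lo):
    # recombine: worst-case error is about 1/65025
    return hi + lo / 255.0

hi, lo = pack_rgba8(0.123456)
err = abs(unpack_rgba8(hi, lo) - 0.123456)
```

One caveat for this thread: linear filtering blends the two packed channels independently, with no carry between them, so hardware interpolation of such packed values is wrong; that is exactly the interpolation problem a filterable float format avoids.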
Maybe 16-bit floating point is enough, as the values are between 0 and 1.
Honestly, if the values are only between 0 and 1, you shouldn’t be using a 16-bit float texture. A half has only 10 bits of mantissa, so you get only 10 bits of [0,1] information per channel; you get 8 with a regular 32-bit RGBA texture. If you really need more than 8 bits of precision for a [0,1] range (and it’s fairly rare that you do for image textures), you should use a 32-bit float texture.
Korval
You’re not entirely correct about 10 bits of precision for numbers between 0 and 1.
Yes, the half type has only 10 bits of mantissa, but some of the exponent bits also “belong to” numbers less than 1: every exponent step below 1.0 contributes another 1024 representable values, so the spacing between adjacent halves gets finer as you approach zero.
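This can be checked exhaustively. As a sketch using Python’s standard struct module, which understands IEEE 754 binary16 (format character 'e'), counting every representable half in [0, 1) gives 15360 values, about 2^13.9, not just 2^10:

```python
import struct

# iterate every positive binary16 bit pattern and decode it
count = 0
for bits in range(0x8000):
    x = struct.unpack('<e', struct.pack('<H', bits))[0]
    if x < 1.0:  # finite halves below 1.0 (includes subnormals and 0.0)
        count += 1
print(count)  # -> 15360
```

The spacing is uneven, though: there are 1024 values per binade, so halves in [0.5, 1.0) sit 2^-11 apart, while values nearer zero are much denser.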