In my GLSL shader I compute a depth value in [0, 1]. I want to output this value to two channels of an RGBA16f texture, in order to keep the full 32 bits of precision.

My initial (naive) approach was to do this:

/// Packing a [0, 1] float value into a 2D vector where each component will be a 16-bit float
vec2 packFloatToVec2f(const float value)
{
    const vec2 bitSh = vec2(256.0 * 256.0, 1.0);
    const vec2 bitMsk = vec2(0.0, 1.0 / (256.0 * 256.0));
    vec2 res = fract(value * bitSh);
    res -= res.xx * bitMsk;
    /// res.x = fract(value * 256.0^2)
    /// res.y = fract(value) - fract(value * 256.0^2) / 256.0^2
    return res;
}
/// Unpacking a [0, 1] float value from a 2D vector where each component was a 16-bit float
float unpackFloatFromVec2f(const vec2 value)
{
    const vec2 bitSh = vec2(1.0 / (256.0 * 256.0), 1.0);
    return dot(value, bitSh);
}

It does not work. I think that, unlike RGBA8 fixed point, a floating-point format is not uniformly spaced over the [0, 1] range; the spacing depends on the IEEE floating-point representation. For instance, the y component ends up holding floor(value * 65536) / 65536, which can need up to 16 significant bits, but an fp16 significand only provides about 11, so the middle bits are rounded away while res.x only carries the bits below them. So I get all sorts of artifacts and loss of accuracy.

Is there a way to pack a float32 into two channels of an RGBA16f texture in GLSL?

Yes, and that is what my code should do (I hope; my knowledge of floating point is very limited). The higher 10 bits of the fp32's mantissa are stored as the floor part in the first fp16 value, and the following 10 bits are stored in the second fp16. Still, you may lose about 3 mantissa bits, as you can't pack 23 bits into 20.
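For reference, a floor/fract scheme along those lines can be sketched as below. This is my own reconstruction of the idea, not the exact code under discussion; the scale factor 2048.0 = 2^11 is an assumption, chosen because an fp16 represents every integer up to 2^11 exactly:

```glsl
/// Sketch: split a [0, 1] float across two fp16 channels using floor/fract.
/// The first channel holds an exact integer in [0, 2048] (fp16 stores all
/// integers up to 2^11 without rounding); the second holds the remaining
/// fraction, of which an fp16 keeps roughly 11 more bits.
vec2 packFloatToVec2f(const float value)
{
    float scaled = value * 2048.0;
    return vec2(floor(scaled), fract(scaled));
}

float unpackFloatFromVec2f(const vec2 value)
{
    /// Recombine the integer part and the fraction, then undo the scale.
    return (value.x + value.y) / 2048.0;
}
```

Round-tripping through this keeps roughly 21-22 bits of the mantissa, consistent with losing a few bits as noted above.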

Thanks. I don’t mind losing a few bits of mantissa, but I definitely want more precision than I’d get with a single fp16.

I tested your code and it seems to work fine, so I’ll consider the problem solved. I feel a bit uneasy using floor/fract to extract the mantissa, but integer, bit-shift and mask operations aren’t available (I’m not willing to go to Shader Model 4), so I guess it’s the only solution.