Trouble packing depth into color texture

I’m implementing omnidirectional shadow mapping using cubemaps. Since I can’t create a depth cubemap on my hardware (MacBook Pro, ATI X1600), I’m packing fragment depth into the cubemap’s RGBA color channels. This seems to be a pretty common technique, and googling for how to do this swizzling turned up a lot of discussion of the best approach. I’m using the approach detailed in this thread, because it makes sense (no magic!):

http://www.gamedev.net/community/forums/topic.asp?topic_id=486847

Here’s my code for packing depth to color and back. (Note: my depths are normalized to [0,1].)


#define DEBUG_PACKING 0

// Pack a normalized depth in [0,1] into the four 8-bit RGBA channels.
vec4 FloatToFixed( in float depth )
{
    #if DEBUG_PACKING
        return vec4( depth, depth, depth, 1.0 );
    #else
        // Scale by 255/256 so a depth of exactly 1.0 still packs cleanly.
        const float toFixed = 255.0/256.0;

        // Each channel keeps the fractional part at a successively finer scale;
        // whatever a channel truncates shows up in the next channel down.
        return vec4(
            fract(depth*toFixed*1.0),
            fract(depth*toFixed*255.0),
            fract(depth*toFixed*255.0*255.0),
            fract(depth*toFixed*255.0*255.0*255.0)
        );
    #endif
}

// Reassemble a depth value from the four packed channels.
float FixedToFloat( in vec4 shadowSample )
{
    #if DEBUG_PACKING
        return shadowSample.r;
    #else
        const float fromFixed = 256.0/255.0;
        return shadowSample.r*fromFixed/(1.0) +
               shadowSample.g*fromFixed/(255.0) +
               shadowSample.b*fromFixed/(255.0*255.0) +
               shadowSample.a*fromFixed/(255.0*255.0*255.0);
    #endif
}
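To sanity-check the scheme off-GPU, here’s a small Python sketch (helper names are mine, not from the shader) of the same pack/unpack math. It assumes the color buffer truncates each channel when converting to 8 bits; under that assumption, each channel’s truncated remainder reappears in the next channel, so the round trip is exact up to the last channel’s truncation:

```python
# CPU model (Python, not GLSL) of the packing scheme above.
# Assumes 8-bit-per-channel storage truncates each channel value.

def float_to_fixed(depth):
    """Mirror of FloatToFixed: split a depth in [0,1] across four channels."""
    to_fixed = 255.0 / 256.0
    x = depth * to_fixed
    # fract() at successively finer scales; % 1.0 is Python's fract()
    return tuple((x * 255.0**i) % 1.0 for i in range(4))

def quantize8(channels):
    """Simulate storing the channels in an 8-bit RGBA buffer (truncating)."""
    return tuple(int(c * 255.0) / 255.0 for c in channels)

def fixed_to_float(rgba):
    """Mirror of FixedToFloat: recombine the four channels into one depth."""
    from_fixed = 256.0 / 255.0
    return sum(c * from_fixed / 255.0**i for i, c in enumerate(rgba))

for d in (0.0, 0.125, 0.5, 0.7331, 0.999, 1.0):
    restored = fixed_to_float(quantize8(float_to_fixed(d)))
    assert abs(restored - d) < 1e-9  # error bounded by the last channel's step
```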

When DEBUG_PACKING is zero (in principle, 32 bits of depth precision), I get the following junk:

However, when storing depth using only 8 bits (the red channel), I get good output:

Since the 8-bit, low-precision version renders correctly, I can infer that the rest of my omni shadow pipeline is correct (or at least not too badly broken).

So, can anybody tell me what’s wrong with the 32-bit precision version? I’m a little baffled…

As a minor update: I dropped down to 24 bits of precision (dropping alpha from the packer), and the results are correct now. So clearly the alpha channel’s the culprit, though I couldn’t say why.

I can live with 24 bits of precision! But if anybody has suggestions as to why alpha’s not usable in this context, I’d love to learn.
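For what it’s worth, the same kind of CPU sketch (Python mirror of the packer, names mine, again assuming truncating 8-bit storage) suggests 24 bits really is plenty: the worst-case round-trip error lands well below the 1/255 ≈ 0.004 step of a single 8-bit channel.

```python
# 24-bit variant: the same packing math, minus the alpha channel.
# CPU model in Python; assumes truncating 8-bit storage.

def float_to_fixed24(depth):
    to_fixed = 255.0 / 256.0
    x = depth * to_fixed
    return tuple((x * 255.0**i) % 1.0 for i in range(3))  # RGB only

def quantize8(channels):
    return tuple(int(c * 255.0) / 255.0 for c in channels)

def fixed_to_float24(rgb):
    from_fixed = 256.0 / 255.0
    return sum(c * from_fixed / 255.0**i for i, c in enumerate(rgb))

# Worst-case round-trip error over a dense sweep of depths:
worst = max(
    abs(fixed_to_float24(quantize8(float_to_fixed24(i / 4096.0))) - i / 4096.0)
    for i in range(4097)
)
assert worst < 1e-7  # far finer than the ~0.004 step of one 8-bit channel
```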

Couple of ideas: when you were rendering the 32-bit packed depth to the color buffer (including alpha), did you disable ALPHA_TEST and BLEND? You don’t want any funny business altering your fragment shader’s output values. You’ll also want to disable MSAA/SSAA (often called FSAA) if it’s enabled, because again, you don’t want anything mucking with your color values after the fragment shader runs.

That’s exactly what I was doing incorrectly! Thanks!