Packing 3 Colour Channels Into 2

Is it possible to pack 3 colour channels into 2 and unpack them using GLSL?
I realise I’ll probably lose some precision along the way, but this may not be an issue.



If your card supports bitwise operators, it’s easy to make your own pack/unpack scheme using them, e.g. 5-6-5 across two 8-bit components, etc. If it doesn’t, I don’t know.

Although you must do any interpolation yourself afterwards.

Hi babis,

thanks for getting back to me.
Let’s assume I won’t be able to use bitwise operators (unfortunately).


No prob,

I guess then you have to emulate the bitshifts.

k << x = k * (2^x)
k >> x = trunc(k * (1 / (2^x)))

Since you’ll know x, you can compute these & have them as constants, so it’s one multiplication.

The above are if you have your color as integral value (e.g. convert 0.0-1.0 -> 0-255, pack, convert back to 0.0-1.0).
If you have floats, well, I don’t remember the bitwise stuff for floats :)

So it’s feasible without the bitwise, probably a bit more expensive though.

Hi again babis,

I’m afraid I’m not really familiar with bitwise operations (I’m a relative coding newbie). What I can’t quite get my head around is how to store 2 (or 3) discrete values in the same floating-point value. You couldn’t possibly post a GLSL snippet to pack, say, the Red and Green components of a vec4 into just the red channel, and then extract the values again, could you? I’m using float values for my colours, incidentally.

Sorry for being a bit dense…


I’ll post later when I’m home. It’s not exactly THE solution you’re searching for, but using the above & the code you can figure out what you need to do. Or ask further questions :)

You didn’t specify, what format are you using for your colorpack-texture? 32bit RGBA ubyte?

Hope I didn’t make any stupid mistake, here we go…

vec2 toRGB565(in vec3 c)
{
	// Convert from [0,1] to [0,31] - 32 possible values for
	// 5 bits (R & B components)
	ivec2 outcInt = ivec2(c.rb * 31.0);

	// Convert from [0,1] to [0,63] - 64 possible values for
	// 6 bits (G component)
	int green = int(c.g * 63.0);

	// Target bits :
	// In x component, keep the low 3 bits of green.
	// In y component, keep the high 3 bits of green &
	// move them to become the low 3 bits.
	ivec2 LOHI = ivec2(green & 7, green >> 3);               //-> bitwise version
	//ivec2 LOHI = ivec2(green - (green / 8) * 8, green / 8); //-> your version (integer division truncates)

	// Move both by 5 bits so that they lie in the high
	// 3 bits of each component.
	LOHI <<= ivec2(5);     //-> bitwise version
	//LOHI *= ivec2(32);   //-> your version (32 = 2^5)

	// OR it now with the R/B components.
	// R & B are limited to 5 bits so we have no overlap.
	// Divide by 255 to rearrange it to [0,1].
	return vec2(outcInt | LOHI) / 255.0;     //-> bitwise version
	//return vec2(outcInt + LOHI) / 255.0;   //-> your version, it is OK since we have NO overlap in the values
}

vec3 fromRGB565(in vec2 c)
{
	// inverse documentation & non-bitwise version left as homework :)
	vec3 outc;
	ivec2 cInt = ivec2(c * 255.0);
	ivec2 cIntMod = cInt & 31;
	outc.rb = vec2(cIntMod) / 31.0;
	ivec2 gComps = cInt >> 5;
	outc.g = float(gComps.x | (gComps.y << 3)) / 63.0;
	return outc;
}

Of course you can optimize it :)

The above works for a 32-bit RGBA ubyte texture. You said you use floats - what floats exactly, and in which range? Or is it dynamic? If you need to pack, you need to know how many bits you have available.
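For what it’s worth, here is a C port of the same pack/unpack (hypothetical helper names), handy for checking the round-trip error on the CPU before wiring it into a shader. The unpack rounds to the nearest byte rather than truncating, to guard against float rounding:

```c
#include <math.h>

/* C port of the GLSL pack/unpack above: R and B get 5 bits each,
   G gets 6 bits split 3+3 across the high bits of the two bytes. */
static void packRGB565(const float c[3], float out[2]) {
    int r = (int)(c[0] * 31.0f);
    int b = (int)(c[2] * 31.0f);
    int g = (int)(c[1] * 63.0f);
    out[0] = (float)(r | ((g & 7) << 5)) / 255.0f;  /* low 3 bits of G  */
    out[1] = (float)(b | ((g >> 3) << 5)) / 255.0f; /* high 3 bits of G */
}

static void unpackRGB565(const float in[2], float c[3]) {
    int x = (int)(in[0] * 255.0f + 0.5f);  /* round, don't truncate */
    int y = (int)(in[1] * 255.0f + 0.5f);
    c[0] = (float)(x & 31) / 31.0f;
    c[2] = (float)(y & 31) / 31.0f;
    c[1] = (float)((x >> 5) | ((y >> 5) << 3)) / 63.0f;
}
```

The round trip is lossy, but the error stays within one quantization step (1/31 for R and B, 1/63 for G).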

Ah great, thanks for that babis. I’ll give that a go. I’m using an 8-bit-channel RGBA texture, with all values in the float 0 > 1 range, so it should work fine, I guess.

I will give it a try and let you know how it goes.

Thanks again,


I am just curious - what is the point of storing 3 colour channels in 2, given that OpenGL always assembles colour values into an RGBA element? Do you want to store other information in the blue and alpha channels, or maybe store 2 textures in one?

Hi dletozeun!

I’m trying to store scene depth information in the B channel, preserving the alpha in the A, and packing the other 3 colours into the R and G channels.

I’d use multiple rendering targets if I had the option, but I don’t.


OK I see, but you will lose a lot of precision storing depth in only 8 bits. If your hardware supports 16-bit or 32-bit floating-point formats (the GL_RGBA16F_ARB and GL_RGBA32F_ARB internal formats), you could store your original Red and Green channels in the red channel of a 16-bit floating-point texture, and do the same in the green channel to store the Blue and Alpha channels.
Then if you create an RGB 16-bit fp texture you can store depth in the last 16-bit channel (Blue). With an RGBA 16-bit fp texture you could even decompose depth into the Blue and Alpha channels to keep 32 bits of precision for depth! :)

I would totally agree. I actually used the packing at some point for the same thing : RGBA + depth in 4 channels. But, as said, the loss of bits sucked.

So, just to clarify what dletozeun said :

R16 -> R8 & G8
G16 -> B8 & A8
B16 -> Depth16(as-is) or Depth32Lo
A16 -> Free! or Depth32Hi
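A CPU-side sketch of the per-channel packing in that table (hypothetical helper names, values assumed to be 8-bit integers). One caveat worth noting: a 16-bit half float only represents integers up to 2048 exactly, so the full hi*256+lo trick really needs a 32-bit float target; with 16F you would have to give up some low bits.

```c
/* Two 8-bit values packed into one wider channel as hi*256 + lo.
   Exact only if the target channel can hold integers up to 65535
   (fine for 32F; a 16F half float is exact only up to 2048). */
static float pack2x8(int hi, int lo) { return (float)(hi * 256 + lo); }

static void unpack2x8(float v, int *hi, int *lo) {
    int i = (int)(v + 0.5f);  /* round to nearest integer */
    *hi = i / 256;
    *lo = i - *hi * 256;
}
```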

Ah… I will look into 16/32-bit rendering. I’m a bit limited by Quartz Composer, but it may be possible to render at higher bit depths. 32 bits might be out, as I can’t use my usual brute-force supersampling technique to smooth edges (since there’s no filtering of 32-bit images).

Looks like 16bit/ch could be the way to go though, if I can get it to work…

I will run some experiments, and maybe come back to you for some specifics later.

Thanks again guys.


Keep in mind that texture filtering (and MSAA) will not work though. If you interpolate between packed values, you will get weird results. Also, if you spread the depth over two 16-bit float components, the interpolation will also be broken.

Also, using only the B component for depth would work properly with interpolation only if your depth is in linear space. If you use the gl_FragCoord.z value, the depth is not linear.
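If you do need linear depth, a common way to recover eye-space depth from the window-space value is the following (a sketch in plain C, assuming a standard perspective projection with near plane n and far plane f; `linearizeDepth` is a hypothetical name):

```c
/* Recover linear eye-space depth from a window-space depth in [0,1]
   (what gl_FragCoord.z gives you), assuming a standard perspective
   projection with near plane n and far plane f. */
static double linearizeDepth(double zWin, double n, double f) {
    double zNdc = 2.0 * zWin - 1.0;  /* window [0,1] -> NDC [-1,1] */
    return 2.0 * n * f / (f + n - zNdc * (f - n));
}
```

At zWin = 0 this yields n, and at zWin = 1 it yields f, as expected.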

Hi bertgp,

I’d forgotten about interpolation. Actually, though, I’m doing the packing in the fragment shader, so vert→frag interpolation wouldn’t be an issue. Filtering probably would be though, since I don’t think I can turn off filtering of the texture at the unpacking end. Back to the drawing-board on this one, I think.

Thanks again for all your advice, guys.


You still have at least one option for this, though it’s a bit heavy…

if you have an unchangeable LINEAR filter, you could do this instead :

  • compute the 4 nearest exact texcoords (hitting on texel centers) & fetch those texels, based on the frag-interpolated texcoords

  • DIY linear interpolation based on the offset of the texcoord to these exact texel texcoords.

You can use a texture lookup for encoding. All you need is a 64x3 LA texture with filters set to GL_NEAREST (which should be pretty cache-friendly).
Each row of the texture would be the 16-bit encoding of a single colour component. You look up 3 times and add all 3 results.

This will allow you to encode using any bit order, so you should choose an encoding that is easiest to decode. RRRBBBBB RRGGGGGG would be OK, since you only need to decode the R component from it and can assume that the low bits of B and G are simply a small acceptable error (if not, then you can also decode the other components properly).

Decoding the R component would be simple - just address a 256x1 LA texture two times (because we have two different bytes of encoded data) and sum up the L from the first lookup and the A from the second lookup.
Or you may try a 1042x1 L texture and address it just once using the entire 16-bit word as a coordinate, but I’m not sure how that would work when G is large (you will likely lose precision or perhaps even hit some wrapping artifacts).

This has a lower instruction count than the purely mathematical method and will work with any generation of fragment shaders. The disadvantage is that it requires texture operations to work.
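To make the two-lookup decode concrete, here is a CPU-side sketch for the RRRBBBBB RRGGGGGG layout (the exact bit assignment - 3 high R bits in byte 0, 2 low R bits in byte 1 - is my assumption); the two arrays play the role of the L and A channels of the 256x1 texture:

```c
/* Decode LUTs for a 5-bit R split across two bytes:
   byte 0 = RRRBBBBB (R bits 4..2), byte 1 = RRGGGGGG (R bits 1..0).
   lutHi/lutLo stand in for the L and A channels of a 256x1 texture. */
static unsigned char lutHi[256], lutLo[256];

static void buildDecodeLUTs(void) {
    for (int b = 0; b < 256; ++b) {
        lutHi[b] = (unsigned char)((b >> 5) << 2); /* high 3 R bits */
        lutLo[b] = (unsigned char)(b >> 6);        /* low 2 R bits  */
    }
}

static int decodeR5(int byte0, int byte1) {  /* two lookups + an add */
    return lutHi[byte0] + lutLo[byte1];
}
```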

But wouldn’t this method end up being significantly slower due to the dependent texture lookups?