Encoding multiple floats into one

I am displaying Quake 3 levels using lightmaps. I am using deferred shading, so I end up needing to render the RGB components of the lightmap into the G-buffer. I am using a high-precision G-buffer (16 or 32 bits per channel), and since the lightmap is 8 bits per component, it is a waste to use 3 slots of the G-buffer for the lightmap. At most I should need 24 bits, which is 1.5 of the 16-bit channels or 0.75 of a 32-bit channel.

So the question:
What is a good method to encode several floats into one? It’s sort of like the space-filling-curve question…

It’s pretty clear how to do this for integers by using shifts, etc., but I need it to work in shaders…

Any thoughts or references? Thanks!

#include <cmath>
#include <cstdio>
#include <algorithm>

int main() {
    float maxerr = 0.0f;
    for (int i = 0; i < 255; ++i) {
        for (int j = 0; j < 255; ++j) {
            for (int k = 0; k < 255; ++k) {
                float x = i / 255.0f;
                float y = j / 255.0f;
                float z = k / 255.0f;
                // Pack the three components as base-255 "digits".
                float enc = x + 255.0f * y + 65025.0f * z;

                // Decode each digit from the fractional part at its scale.
                float ipart;
                float xd = 255.0f * std::modf(enc, &ipart);
                float yd = 255.0f * std::modf(enc / 255.0f, &ipart);
                float zd = 255.0f * std::modf(enc / 65025.0f, &ipart);

                float xerr = std::abs(xd - i);
                float yerr = std::abs(yd - j);
                float zerr = std::abs(zd - k);
                float err = std::max(xerr, std::max(yerr, zerr));
                maxerr = std::max(maxerr, err);
            }
        }
    }
    printf("max err:%10.5f\n", maxerr);
    return 0;
}

Maybe something like this? It seems to preserve the precision reasonably well. Will have to see if there is a modf in GLSL…
Found the needed link at:
encode rgba to float
Basically the reverse procedure…
Why 65025???

This seems to give a maximum error of about 0.9 (on a 0-255 scale), which is probably fine… on the CPU, anyway…

Just found this as well, oops:
gl solution

Several bits:

  1. If you are using GL3 hardware you can have multiple render targets, each target with a different format. For simplicity you can make another render target and store the lightmap data as GL_RGB8 (8-bit fixed point, clamped), no conversion needed, though AFAIK the hardware might implement GL_RGB8 as GL_RGBA8, so you “waste” one byte per pixel.

  2. If you are already using “too many render targets” then consider GL_EXT_texture_integer. GeForce 8/9/2xx/3xx can render up to 8 textures at the same time.

Even under Mac OS-X (with a GeForce 8/9/2xx/3xx) both of these are possible [as Mac OS-X exports both GL_EXT_texture_integer and GL_ARB_framebuffer_object]

On the other hand, if you are using a GeForce 6/7, neither of these is possible: they have a render-target limit of 4 buffers, a significant performance hit above 3 (I think), and all buffer targets must be the same format.

Thanks, I wasn’t aware of that!

I will keep this in mind. My card (gtx 260) is pretty advanced, but I sort of want to ‘play in bounds’ as I learn this stuff.
I’m not near the limit yet, but I can see how you could get there pretty quick, especially if you’re using more bits than the precision of your data calls for.

If you are using GL3 hardware you can have multiple render targets, each target with a different format.

Actually, this is not restricted to GL 3.x hardware. Any card that supports ARB_framebuffer_object (as opposed to EXT_framebuffer_object) can handle targets with different formats. According to the OpenGL extension viewer database, this includes most GL 2.x hardware (even down to the Radeon 9550).

GeForce 8/9/2xx/3xx can render up to 8 textures at the same time.

So can any DX10-class hardware.

That’s really good to know. With that I am ok to assume up to 8 textures, which should be way more than I ever need (oops, having a 64k moment there… :slight_smile: )! Thanks. Of course, I will probably still do the compression, just to not use any unneeded bits and to reduce bandwidth… But as things progress I can definitely see needing more buffers.