# Compressed Vertex Arrays

Hi there

Recently I browsed through ATI's "Toy Shop" slides. For geometry they used a vertex array with compressed normals and tangents. This was some D3D-specific format, which claims to compress normals 3:1.

1. How do they compress a normal 3:1? I assume this is done in hardware and is similar to 3Dc or DXTC, but for vertex data instead of textures?

2. Does OpenGL support this too, and if so, how do I do that? I have never heard of it, so I am pretty sure OpenGL doesn't support it.

Currently I am working on terrain, and a 2048^2 sized terrain takes 48 MB of VRAM for position data alone. I need normals too. Tangents and texcoords can be calculated in a vertex shader, but with positions and normals (uncompressed) such a big heightfield takes 96 MB! I have already gotten an "out of memory" error on some gfx cards.

Therefore I would be very happy if I could compress at least the normals.

Any ideas?

Thanks,
Jan.

[b]

1. How do they compress a normal 3:1? I assume this is done in hardware and is similar to 3Dc or DXTC, but for vertex data instead of textures?
[/b]
I have not read those slides, but I assume it is bitpacking of three values in the <-1, 1> range into one 32-bit integer, assigning 11:11:10 bits to the individual components, or something like that. The hardware then unpacks that integer back into three float values.
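A minimal sketch of what such 11:11:10 bitpacking could look like; the component order, rounding, and scale factors here are my guesses, not anything taken from the slides:

```c
#include <stdint.h>

/* Pack three floats in [-1, 1] into one 32-bit integer as 11:11:10 bits.
   Putting x in the low bits and z in the high bits is an assumption. */
uint32_t pack_11_11_10(float x, float y, float z)
{
    uint32_t ix = (uint32_t)((x + 1.0f) * 1023.5f); /* 11 bits: 0..2047 */
    uint32_t iy = (uint32_t)((y + 1.0f) * 1023.5f); /* 11 bits: 0..2047 */
    uint32_t iz = (uint32_t)((z + 1.0f) * 511.5f);  /* 10 bits: 0..1023 */
    return (ix & 0x7FFu) | ((iy & 0x7FFu) << 11) | ((iz & 0x3FFu) << 22);
}

/* The inverse, roughly what the hardware would do before the vertex
   shader sees the attribute. */
void unpack_11_11_10(uint32_t p, float *x, float *y, float *z)
{
    *x = ( p        & 0x7FFu) / 1023.5f - 1.0f;
    *y = ((p >> 11) & 0x7FFu) / 1023.5f - 1.0f;
    *z = ((p >> 22) & 0x3FFu) / 511.5f  - 1.0f;
}
```

The round-trip error is bounded by the step size of each component, about 1/1024 for the 11-bit fields and 1/512 for the 10-bit one.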

I'm not sure, but I seem to remember some people on this forum or in the shader forum talking about it. It was months ago.

Also, why don't you use things like dynamic VBOs, LOD and such? I guess that would help cut down memory consumption.

Hope that helps.

You can do some packing on your own. For example, if you know that one coordinate of the normal always has a specific sign, you can store only two coordinates and reconstruct the third from the knowledge that the vector is normalized. Additionally, by multiplication, division and taking the fractional part you can store several values in one float, although you have to be careful how you do it so the individual components do not influence each other.

From the ATI demo here:
http://www.ati.com/developer/samples/dx9/QuantizedVertexNormals.html

Here is the C code used to pack the normals; it should be easy to write something to unpack them in a vertex shader.

```
//////////////////////////////////////////////////////////////////////
//
// Packed unsigned 10-bit integers which represent 0 to 1023
//
// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
// |X|X|  (z + 1) * 511.5  |  (y + 1) * 511.5  |  (x + 1) * 511.5  |
// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//
DWORD CMyD3DApplication::PACK_UDEC3(float x, float y, float z)
{
    // Convert to 0 to (2^10)-1 range
    DWORD dwX = (DWORD) ((x + 1.0f) * 511.5f);
    DWORD dwY = (DWORD) ((y + 1.0f) * 511.5f);
    DWORD dwZ = (DWORD) ((z + 1.0f) * 511.5f);

    return (dwX & 0x3FF) | ((dwY & 0x3FF) << 10) | ((dwZ & 0x3FF) << 20);
}

//////////////////////////////////////////////////////////////////////
//
// Packed signed 10-bit integers which are 511 times normalized values
//
// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
// |X|X|S|      z*511      |S|      y*511      |S|      x*511      |
// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//
DWORD CMyD3DApplication::PACK_DEC3N(float x, float y, float z)
{
    // Convert to signed integer -511 to +511 range
    int iX = (int) (x * 511.0f);
    int iY = (int) (y * 511.0f);
    int iZ = (int) (z * 511.0f);

    return (iX & 0x3FF) | ((iY & 0x3FF) << 10) | ((iZ & 0x3FF) << 20);
}
```
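For the unpack side, here is a C sketch of the inverse of PACK_DEC3N; in practice the same arithmetic would go into the vertex shader (or be done for you by a native DEC3N vertex format):

```c
#include <stdint.h>

/* Inverse of PACK_DEC3N: extract three 10-bit two's-complement fields
   and scale them back to floats in roughly [-1, 1]. */
void UNPACK_DEC3N(uint32_t packed, float *x, float *y, float *z)
{
    int ix = (int)( packed        & 0x3FF);
    int iy = (int)((packed >> 10) & 0x3FF);
    int iz = (int)((packed >> 20) & 0x3FF);

    /* sign-extend each field from bit 9 */
    if (ix & 0x200) ix -= 0x400;
    if (iy & 0x200) iy -= 0x400;
    if (iz & 0x200) iz -= 0x400;

    *x = ix / 511.0f;
    *y = iy / 511.0f;
    *z = iz / 511.0f;
}
```

Since the pack step truncates rather than rounds, the round-trip error can be up to one quantization step (about 1/511) per component.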

Originally posted by Jan:
1. How do they compress a normal 3:1? I assume this is done in hardware and is similar to 3Dc or DXTC, but for vertex data instead of textures?
I assume it's the DEC3 format. It's basically 10 bits per channel with 2 bits unused, so that's 32 bits versus 96 bits if you use floats.

Originally posted by Jan:
2. Does OpenGL support this too, and if so, how do I do that? I have never heard of it, so I am pretty sure OpenGL doesn't support it.
It’s only supported in D3D. AFAIK only ATI supports it. Of course, it could be added to GL as an extension if there’s sufficient demand. So far I’ve never heard anyone ask for it.

Sorry for not posting in a while; my laptop broke (the gfx card, of course).

Yes, it was the DEC3 format. I had never heard of compressed vertex formats before. If possible, I would like to see this added to OpenGL as an extension. I think it is an option to save bandwidth (and thus increase speed) that is becoming more and more practical: today nearly every model needs normals/tangents/bitangents, and if I can store all of them in 96 bits instead of three times the memory, that would be very useful. On a heightfield I can get away with storing only the normal and calculating the rest on the fly, but usually that's not possible.

I assume that 10 bits per component are sufficient in most cases, are they?

And thanks for the idea to use dynamic VBOs. I think I will split my terrain into 1024^2 sized pieces and upload only the lower LOD levels when one is too far away.

Thanks,
Jan.

Originally posted by Humus:
It’s only supported in D3D. AFAIK only ATI supports it. Of course, it could be added to GL as an extension if there’s sufficient demand. So far I’ve never heard anyone ask for it.
I'd be very glad to have this extension at hand, so you can count at least one demand!

I'd also be glad to see GL_ARB_pixel_buffer_object on ATI, or at least some extension that could reproduce "render-to-VB" in OpenGL!

Originally posted by Jan:
Yes, it was the DEC3 format. I had never heard of compressed vertex formats before.
Well, I wouldn’t call it compressed. I’d call it “packed”.

Originally posted by Jan:
I assume that 10 bits per component are sufficient in most cases, are they?
Absolutely. In fact, I think 8 bits per channel will work in the vast majority of cases. You can already use something like 4 bytes for the normal (and perhaps use the last component for something useful).

Humus: you meant 8 bits per channel, right?

Yes, fixed.

I have been using GL_BYTE for normals; it gives good results if the attribute is interpolated at higher precision (which it should be).
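A minimal sketch of that byte quantization; the scale factor 127 and reserving a spare fourth byte are conventions I chose here, not the only option:

```c
/* Quantize a unit normal to 4 signed bytes, suitable for a GL_BYTE
   vertex attribute; the fourth byte is free for any per-vertex scalar. */
void normal_to_bytes(float x, float y, float z, signed char out[4])
{
    out[0] = (signed char)(x * 127.0f);
    out[1] = (signed char)(y * 127.0f);
    out[2] = (signed char)(z * 127.0f);
    out[3] = 0; /* spare component */
}
```

Four bytes (rather than three) also keeps each vertex attribute aligned to a 4-byte boundary, which most hardware prefers.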