You may have seen the Inferred Lighting paper at SIGGRAPH 2009. (If you haven't, check http://diaryofagraphicsprogrammer.blogspot.com/2009/08/siggraph-2009-impressions-inferred.html.)
The paper describes its 16-bit G-buffer like this:
GBuffer.x = normal.x; GBuffer.y = normal.y; GBuffer.z = linear depth in viewspace; GBuffer.w = (ObjectGroupID << 8) | NormalGroupID;
The question is: what internal format should this G-buffer use?
Obviously, the normal and depth are floats, which can be stored as 16-bit floats. But the two IDs together form a 16-bit unsigned short, and GLSL cannot reinterpret an int's raw bits as a float the way DX10's asfloat() can. Any ideas?