I just saw the Ferrari demo on an R9700 and I have a question about the normal map format they are using.
I know per-pixel lighting on a fragment basis, but I used 8x8x8-bit components to store the normal vector. This little demo shows the use of 16x16 bits instead. Of course it looks amazingly precise, but I don't understand how they compute the third component from the texture. Can someone help?
I know that it uses floating-point precision, but how can I reconstruct a third component for the normal from a 16x16 texture?
Vector solution
r = sqrt(x*x + y*y + z*z), where x, y, z are the coordinates of a vector and r is its length.
A normal's length is always 1 (it is normalized), so:
z = ±sqrt(1 - x*x - y*y), and for a tangent-space normal map z is always taken positive.
However, in OpenGL, ATI supports the RGB[A]16 internal format, so you could just upload it as you normally would, only with UNSIGNED_SHORT or float as the source data type. That's how I do it now, and it saves fragment program instructions.
Otherwise you would have to upload it as a luminance-alpha 16-bit/channel texture (32 bits total), do some swizzling in the fragment program, and then recompute the z channel. That would save texture memory.
Just remember that on current hardware (R3xx, NV3x) floating-point textures aren't filtered, so you would have to live with nearest filtering (with the GL_RGB16 internal format you do get filtering).
What do you mean, they are not filtered? Is it something that still has to be implemented, like float ARB pbuffers, or is it a hardware limitation, so I will end up writing fragment programs for filtering purposes? (If this feature gets added to OpenGL later, I wouldn't mind for now whether it's filtered or not.)