i’m working on an impostor (billboard) rendering demo where the impostor texture is regenerated whenever a view change makes it necessary. this is done by rendering the geometry to a texture. since the impostors are flat, a normal map would be ideal for lighting them.
however, i think generating a normal map every 20 frames or so in software would be rather expensive. is there a way to let the GPU do this for me somehow?
can anyone point me to some good resources about real-time normal map creation?
If you have “shader”-capable hardware, it’s fairly easy, as the normal is available as an input to the shader; just offset/scale it and write it out to the output texture.
With the fixed-function pipeline, I can’t offhand think how to do it, unless you’re OK with object-space normals; in that case you can probably bind your normal array as a GL_FLOAT ColorPointer array, although the offset/scale is still an issue.
what exactly do you mean by “offset and spit out to the output texture”? (i haven’t done a lot of shader work yet)
like, the RGBA pixels and the normal map are on the same texture, aligned side by side, and i would just shift my normal-map “pixels” to where i want them in the texture? is that it?
if you want to render the normals of your rendered billboard… ehh, impostor, just feed your normal array (i assume you are using vertex arrays) into a texcoord array, and bind a normalization cubemap to it.
another method is to use NV_texgen_reflection, which also has a texgen mode that outputs normals for dot-product per-pixel lighting.
I did this some time ago. The only problem is if you are using tangent-space normals; then you have to convert the normals to tangent space, per vertex.
The fragment shader is really simple, though: just remap the normal from [-1,1] to [0,1] and output it as a color.
struct v2f {
    float4 position : POSITION;
    float3 normal   : TEXCOORD0; // interpolated per-vertex normal
};

void main(v2f IN, out float4 colOut : COLOR0) {
    // remap each component from [-1,1] to [0,1] so it fits in a color channel
    colOut.rgb = normalize(IN.normal) * 0.5 + 0.5;
    colOut.a = 1.0;
}
By “offset and scale” I mean that you multiply by 0.5 and add 0.5, to compensate for the fact that color components are in the range [0,1].
To render to the normal map, just render as usual, except don’t do any lighting; once you have the calculated normal, output THAT to the render target. That’s where the offset/scale comes in, because normal components are in the range [-1,1] while color components are in [0,1].