I’m changing my application to get away from the shader builtins. What’s the preferred method to replace gl_NormalMatrix? I’m assuming computing the transpose of the modelview inverse on the CPU is the best approach. Maybe this belongs in the math forum, but can someone point me to fast matrix inversion code?
I took a look at Mesa. Unfortunately, it supports only OpenGL 2.1. GLSL 1.3, which was introduced along with OpenGL 3.0, was the first GLSL version to deprecate gl_NormalMatrix.
Right now, I lazily compute the NormalMatrix and call glUniform for the four matrices: NormalMatrix, Projection, ModelViewMatrix, and ModelViewProjection. The computation and upload are done right before draw calls whose matrices are dirty. However, the performance is terrible.
Before I upgraded my application from GLSL 1.2 to 3.3, my frame time for a particular view was 29 ms. After the upgrade, my frame time was 38 ms. If I remove the inverse computation for the NormalMatrix, the frame time improves to 30 or 31 ms.
So, maybe I just have an awful matrix inverse routine. Still, I was previously using a single generic glLoadMatrix call, so I imagine the driver would have had to compute gl_NormalMatrix with a generic inverse method as well.
Isn’t everyone else in the same situation? What have other people done to minimize the performance impact?