Migrating to OGL 3.x and MV and Proj matrices

Now that the built-in gl_*Matrix uniforms are deprecated, we need to supply the shaders with our own matrices via uniforms. So far so good.

Imagine I have the following part of a vertex shader:

uniform mat4 my_projection_mat;
uniform mat4 my_modelview_mat;
in vec4 my_vertex;
void main() {
    gl_Position = (my_projection_mat * my_modelview_mat) * my_vertex;
}

My question is: will the driver perform the (my_projection_mat * my_modelview_mat) multiplication for every vertex, or will it understand that this calculation only needs to be done once?

I think, just to be on the safe side, it would be better to do the multiplication yourself on the CPU and pass the result to the shader. Some compilers might optimize it away, but you can't be too sure.
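For example, the CPU side could look roughly like this. This is a plain-Python sketch with a hand-rolled 4x4 multiply standing in for a real math library such as GLM; the matrices are made-up stand-ins, and the glUniformMatrix4fv call is shown only as a comment because it needs a live GL context:

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component vector."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Hypothetical example matrices: a simple scale standing in for the
# projection, a translation standing in for the modelview.
projection = [[2, 0, 0, 0],
              [0, 2, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]]
modelview  = [[1, 0, 0, 3],
              [0, 1, 0, 4],
              [0, 0, 1, 5],
              [0, 0, 0, 1]]

# Done once per draw call on the CPU, not once per vertex:
modelviewprojection = mat_mul(projection, modelview)
# glUniformMatrix4fv(mvp_location, 1, GL_TRUE, flatten(modelviewprojection))

# The shader then only does one matrix-vector multiply per vertex:
vertex = [1, 1, 1, 1]
print(mat_vec(modelviewprojection, vertex))  # prints [8, 10, 6, 1]
```

The shader side shrinks to a single `uniform mat4 my_modelviewprojection_mat;` and one matrix-vector multiply.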

I would also err on the side of caution and provide a my_modelviewprojection_mat. Otherwise, this optimization could be tricky for the compiler: since "my_projection_mat * my_modelview_mat" is computed independently for each vertex, the compiler would have to replace your two uniforms with a single combined uniform. It would then need to pretend that both of your original uniforms still exist, and whenever you modified one it would recompute the combined matrix on the CPU. I don't know enough about GLSL compilers to say whether anything like this is common; I rather doubt it, but I'd be curious to know.


If you regroup your expression to favor matrix-vector over matrix-matrix multiplies, you come out about even in your case.
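Spelled out, that regrouping is just associativity: (P * M) * v equals P * (M * v), but the left grouping costs a matrix-matrix multiply (64 scalar multiplies) plus a matrix-vector multiply (16), while the right grouping is two matrix-vector multiplies (32). A quick sketch in plain Python, with hypothetical helpers and integer matrices so the comparison is exact:

```python
def mat_mul(a, b):
    """4x4 matrix-matrix product (row-major nested lists): 64 multiplies."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """4x4 matrix times 4-vector: 16 multiplies."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Made-up stand-ins: a scale as "projection", a translation as "modelview".
P = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
M = [[1, 0, 0, 3], [0, 1, 0, 4], [0, 0, 1, 5], [0, 0, 0, 1]]
v = [1, 2, 3, 1]

left  = mat_vec(mat_mul(P, M), v)  # (P * M) * v: 64 + 16 multiplies
right = mat_vec(P, mat_vec(M, v))  # P * (M * v): 16 + 16 multiplies
print(left, right)  # identical results, different cost
```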

Make all your matrices row-major and it will work out that way automagically in a single sweep from left to right.
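To illustrate what that left-to-right sweep means, here is a sketch assuming the row-vector convention that reading order implies (vertex as a row vector, translation in the bottom row, matrices stored row-major): plain left-to-right evaluation of v * M * P is automatically the cheap grouping, two vector-matrix multiplies. The helper names and matrices are hypothetical:

```python
def vec_mat(v, m):
    """Row 4-vector times 4x4 matrix (row-major nested lists)."""
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

def mat_mul(a, b):
    """4x4 matrix-matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Row-vector convention: the translation lives in the bottom row.
M = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [3, 4, 5, 1]]
P = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
v = [1, 2, 3, 1]

# Left-to-right sweep: (v * M) * P, two vector-matrix multiplies.
swept = vec_mat(vec_mat(v, M), P)
# Same result as folding the matrices together first: v * (M * P).
folded = vec_mat(v, mat_mul(M, P))
print(swept, folded)
```

Note that GLSL's own matrices are column-major by default, so getting this reading order in a real program is a matter of convention and of the transpose flag you pass when uploading uniforms.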

Though, having said that, a smart optimizer could probably rearrange things a bit on its own too. For larger, more complex subexpressions, however…