This is not a math question but a performance question:
I have a transformation matrix M.
I also have a fixed set of transformation matrices, say N1, N2, …, Nn. These matrices are static as well: they are defined once and never change.
Based on specific runtime data I need to multiply M by one of them, Nx. And I need the resulting matrix R for further "manual" number crunching. I've implemented this in two ways; my question to you is: which one is faster/better?
way 1 - do it via OpenGL:
For example, N1 is defined/initialized as a display list (some glRotatef calls, etc.).
// select theDisplayList based on some
// specific data, then call this list:
glCallList(theDisplayList);               /* multiplies Nx onto the current matrix M */
glGetFloatv(GL_MODELVIEW_MATRIX, R);      /* read the product back into R */
Now I have R and can play with it.
way 2 - do it myself:
Nx is defined in my own MatrixTypeDef; I do everything "internally": I multiply M with Nx using my own MatrixMultiplicationFunction and get R. The only thing OpenGL has left to do is load the resulting matrix.
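A minimal sketch of way 2, assuming a column-major 4x4 layout (the one OpenGL expects); the names `Mat4` and `mat4_mul` stand in for your own MatrixTypeDef and MatrixMultiplicationFunction and are purely illustrative:

```c
#include <string.h>

/* Hypothetical stand-in for MatrixTypeDef: 16 floats, column-major,
 * so r.m can be passed directly to glLoadMatrixf. */
typedef struct { float m[16]; } Mat4;

/* R = A * B, column-major: R[col][row] = sum_k A[k][row] * B[col][k] */
static Mat4 mat4_mul(const Mat4 *a, const Mat4 *b)
{
    Mat4 r;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a->m[k * 4 + row] * b->m[col * 4 + k];
            r.m[col * 4 + row] = s;
        }
    return r;
}

static Mat4 mat4_identity(void)
{
    Mat4 r;
    memset(r.m, 0, sizeof r.m);
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}
```

Usage would then look like `Mat4 R = mat4_mul(&M, &Nx);` followed by the single remaining OpenGL call, `glLoadMatrixf(R.m);`. Note that way 2 never reads anything back from the GL, which is exactly what makes it attractive performance-wise.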
I think the 2nd way is cleaner: more Model-View-Controller-like. And the 2nd one might possibly be faster, too. Is this correct? Is it relevant? Is it worth it if the application is not meant to be highly portable?