In almost every OpenGL tutorial I read on the web, some form of math library is used, GLM for example. I usually write my own math code via metaprogramming (I generate the code necessary to do matrix multiplication, etc.).
My question is: instead of glm::translate, glm::ortho, etc., why not use the GL equivalents: glTranslatef, glOrtho, etc. (aside from the fancy C++ operator overloading that lets you do matrix multiplication more conveniently)? You might say, "why not try and see for yourself?" Well, looking at the parameters glTranslatef takes, it takes 3 floats, which are the amounts of translation along each axis, but… what is it translating? It also returns void rather than a translation matrix I could apply to my objects.
So I guess I don't really understand how glTranslate and the other gl functions work, and whether I could use them to get the same results I get from their GLM equivalents. Are these GL functions only for the compatibility profile, or are they usable in core too?
Well, there are many reasons not to use those GL functions to do generalized math.
A lot of them fall into this category: those functions were removed from the 3.2 core profile. So for a lot of people, and a lot of tutorials, those functions flat out don't exist.
Others fall into a different category: GL's matrix functions exist purely to build the matrices that OpenGL itself uses for various processes. Which means that, if you just want to multiply two matrices, you have to glLoadMatrix one, glMultMatrix the other, and then use glGetFloatv to read the result back. This is not only silly; it is also needlessly slow.
To answer your question as to what those functions do, (compatibility) OpenGL has a number of matrices. There is the GL_MODELVIEW matrix, GL_PROJECTION matrix, and the GL_TEXTURE matrix (one for each texture unit). When doing fixed-function vertex T&L, positions are multiplied by the MODELVIEW matrix (normals are multiplied by the 3x3 inverse-transpose of MODELVIEW). The positions are then multiplied by the PROJECTION matrix. If various texture state is set, then the texture coordinate for a particular texture unit is multiplied by its TEXTURE matrix.
All of the OpenGL matrix functions operate on the "current" matrix, which is selected with glMatrixMode. So if you call glTranslatef, it will create a translation matrix and right-multiply it into the current matrix, storing the result back as the new current matrix.
OpenGL’s matrices each have a stack, so you can use glPushMatrix to preserve the current matrix and glPopMatrix to restore the most recently preserved matrix value.
But again, if you’re using modern OpenGL, then you can ignore all this.
Thanks for the reply. You mentioned they are slow; why would they be slow? I mean, what else does glTranslatef do other than, well, translate stuff? Are there other reasons these functions got deprecated? Maybe there's an issue from the usability and API-convenience point of view? It would be nice if GLEW could somehow annotate those functions with a 'deprecated' macro that expands to nothing, or something similar, so we could steer clear of them.
They are slow because the data has to travel between your program and OpenGL. Every OpenGL function call has overhead, especially if data has to be moved from main memory (or the CPU cache) to video memory and back. And in the end, the OpenGL functions cannot do anything you can't do yourself. By using a math library you get predictable performance, independent of the hardware or driver implementation.
It’s glGet() that’s slow. Typical OpenGL commands simply append a command (an opcode and parameters) to the command queue then return immediately. glGet() has to do that, then flush the commands to the GPU, wait for the GPU to finish executing all of the queued commands followed by the glGet() command, and return the data to the CPU. At the extreme, if you’re using OpenGL on X11 with a remote X server, there’s a network connection involved, meaning that glGet() can plausibly be up to eight orders of magnitude (a hundred million times) slower than “normal” OpenGL commands (a few hundred milliseconds of network latency versus a few nanoseconds to append a command to a buffer).
Because there's not much use for them outside of "toy" programs. Often you need the matrix on the CPU for purposes not related to rendering, e.g. physics simulation. In that case, it's orders of magnitude faster to generate the matrix on the CPU and then copy it to the GPU than the other way around. Also, having a fixed set of matrices (model-view, projection) is inflexible compared to just passing matrices as uniforms or attributes of a shader program.