Should I use the gl* functions (glTranslate, glRotate, glScale, …) or my own matrix multiplication functions? Which is faster? Are both hardware accelerated?

It does not matter. The matrix multiplication is not hardware accelerated either way, but that's fine, because it's not going to be your bottleneck.

Which one is faster depends on how you implement your own matrix functions. You can assume that the gl* functions are reasonably fast, but it's certainly not impossible to write faster ones.

I always believed that glTranslate and the like were done on the hardware side, and thus accelerated since the first T&L cards (GeForce 1). Did I miss something, or did I misunderstand what you wrote?

OpenGL SuperBible 3rd edition says this about T&L:

“Many OpenGL implementations have what is called hardware T&L. This means that the transformation matrix multiplies many thousands of vertices on special graphics hardware that performs this operation very, very fast. However, functions such as glRotate and glScale, which create transformation matrices for you, are usually not hardware accelerated because typically they represent an exceedingly small fraction of the enormous amount of matrix math that must be done to draw a scene”

I’m confused now. Should I avoid using gl* functions?

T&L simply means that vertex transformations are processed by the graphics hardware instead of the CPU. Part of vertex transformation is multiplying each vertex by the modelview matrix stored in GL state. Functions like glTranslate operate on the current matrix, altering it. These functions are not accelerated because there is no point in accelerating them: a single matrix multiplication is fast enough on the CPU, probably even faster than sending the data to the GPU.

Hence, you won't see any difference in performance regardless of whether you use your own matrix functions or the gl functions… Even more, you can safely assume that these operations take about 0% of the processing time.

Thanks, that made everything clear.