hello guys

I was reading the OpenGL SuperBible, 3rd edition,
and if I didn't misunderstand it (I'm Brazilian, so bear with my English :smiley: heheheh)

the book says that transformations (translation, rotation, and scale) are faster to calculate with your own methods, because they can be accelerated by the GPU (hardware T&L - transform & lighting),

instead of using glTranslate, glRotate, and glScale, because those aren't implemented to use hardware acceleration.

Is what I said confusing?
If not, is it right?

thank you all


Yes, it is a bit confusing :slight_smile:

I think you have it reversed:

  • using glTranslate, glRotate, glScale, or glMultMatrix allows the driver to use the GPU's hardware T&L if available.
  • if you do it by hand, it will not be hardware accelerated, because it will run on the CPU.

hi ZbuffeR, how's it going? :slight_smile:

but here's what I read in the book:

“Many OpenGL implementations have what is called hardware T&L (for Transform and Lighting). This means that the transformation matrix multiplies many thousands of vertices on special graphics hardware that performs this operation very, very fast. (Intel and AMD can eat their hearts out!) However, functions such as glRotate and glScale, which create transformation matrices for you, are usually not hardware accelerated because typically they represent an exceedingly small fraction of the enormous amount of matrix math that must be done to draw a scene.”

so… what does that mean?
glTranslate, glRotate, and glScale don't use the GPU?

And if I create my own transformations, can I use the GPU to speed up the calculations?

because… that's what it seems like when you read the passage above…

and this text is in a section about advanced matrix manipulation

and now I found this:

“Once again, you should remember that the glMultMatrix functions and other high-level functions that do matrix multiplication (glRotate, glScale, glTranslate) are not being performed by the OpenGL hardware, but usually by your CPU.”

really nice huh? :smiley:



Of course, a single matrix multiplication does not need a lot of power (some multiplications plus some additions), and can be done efficiently enough by the CPU.

But then every vertex in the scene must be transformed by the modelview and projection matrices, and with millions of triangles that means quite a lot of computation (better done by the T&L GPU).

What’s uuuuppp!

So… if I just want to move, scale, and rotate simple objects (simple models), I use glTranslate, glRotate, and glScale.

But if, like in game development,
I have character models and terrain, and I need to transform the vertices of each of them, can I use my own matrix calculations so they get accelerated by the OpenGL hardware?

Is that it?

thanks :slight_smile:
Happy New Year to ALL :smiley:


I don’t know if this is any clearer or not, but here’s how I think of it. You’re really talking about two different operations. One operation is the construction of the matrix you use to transform your vertices, and the second is the actual transformation of the vertices using the matrix you’ve constructed.

It really doesn’t matter how you construct your matrix – you can use the OpenGL functions glTranslate(), glRotate(), and glScale(), or you can construct the matrix yourself and supply it with glLoadMatrix() or glMultMatrix(). Since these operations take place relatively rarely and they aren’t computationally complex, all of these calculations are probably going to take place on the CPU.

It does matter how you actually use the matrix you’ve constructed though. One way to do this would be to retrieve the matrix you’ve constructed with glGetFloatv() and then to do the transformation yourself by multiplying each vertex with this matrix. This isn’t a very good idea however, since a) it’s a nontrivial amount of work, and b) you’re guaranteed not to take advantage of any special hardware that can accelerate the transform operation for you. If you let OpenGL do the transformations for you, it is possible to take advantage of transform and lighting hardware.

So, in summary, don’t worry about how you’re creating your transformation matrix. As long as you let OpenGL transform your vertices for you, you’re in good shape.


it's a bit confusing :smiley: heheheh, like ZbufferR said.

But to simplify: I would like to know if it's better to construct my own matrix to calculate the transformation of the vertices (a matrix the OpenGL hardware will then use at full speed) than to use glTranslate, glRotate, etc.?

For example: to animate a character I need to calculate the rotation and translation matrix of each bone and multiply this matrix with each vertex linked to that bone (a simple summary).

For these transformation calculations I create my own matrix to translate and rotate each bone and then apply it to the vertices, because I can't use glTranslate or glRotate there (I don't know if it's even possible).

So, if I do my own matrix calculations, can I use the OpenGL hardware to speed them up and take that load off the CPU?

Or is there a method to transform vertices, like I described above for character animation, using OpenGL functions?

I hope I'm not being too repetitive
heheheheh :smiley: