hmm… should I normalize my normal vectors myself or let the OpenGL ICD do it?

Red Hat Linux 7.1
accelerated drivers from Nvidia (so hardware acceleration under X)
Riva TNT2 Ultra (Viper 770 Ultra)

So, which choice should be faster?

Development-time-wise, letting OpenGL do it is faster; runtime-wise, calculating it yourself is faster.

nVIDIA’s FAQ recommends that you always let OpenGL do it, as the extra cost is very small.


Thank you, Mr. Shoe… I did it both ways and checked which one is faster…

Can I trust that it will be faster that way on all machines (at least with Nvidia ICDs), or is the ICD always slower than your own calculations… even on hardware T&L machines??

ahh well that answers the question! thanks michael

First of all, try to avoid scaling the modelview matrix. If you have to, use uniform scaling and the rescale-normals extension if you’re running on machines without HW T&L; otherwise just enable normalization (you can’t do it yourself, btw, unless you’re doing all the transformations and lighting yourself).

As harsman says, you only need to enable it when you rescale your models, e.g. with glScalef(…). Otherwise leave it switched off.

hmm… if I ever use hardware scaling… it only does it once, right? Not every frame?

When you scale the current matrix (usually the modelview, but it could be projection or texture) it stays that way until you change it or pop the stack, and the current matrices affect all vertices passed to OpenGL. This is normal though, and nothing you should worry about.