Beginning with vertex programs


I’m just starting to dive into vertex programs (ARB_vertex_program, NV_vertex_program, etc.), but before I do I want to check whether I’m right about the following:

A vertex program will transform the three vertices of a triangle and then the triangle gets rasterised just as if it were drawn in the regular OpenGL pipeline?
So ‘all’ the vp will do is calculate three sets of [xyz co-ordinates, (multiple) texture co-ordinates, light values] and then for the triangle these get interpolated?

Or is a vp called every time OpenGL calculates an interpolated co-ordinate, (set of) texture co-ordinates, or light value while rendering the triangle?

At the moment this is all very vague for me. I keep getting confused because on the one hand they are called vertex programs, but on the other hand they seem capable of doing per pixel operations…

Thanks in advance for explaining this part, because without this basic starting point I am continually lost while reading these vertex_program extension specs.


The vertex program is called once for each vertex you pass to OpenGL (of course, if you pass the same vertex twice, the program will be called twice as well).
The coordinates, color, texture coords etc. you calculate in that program (and pass on) are then interpolated across the polygon just as if they were created by the normal T’n’L steps.
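For concreteness, a minimal ARB_vertex_program that does nothing but the standard transform might look like this (an untested sketch; `state.matrix.mvp` is the built-in binding for the current modelview-projection matrix):

```
!!ARBvp1.0
# Transform the incoming vertex position by the
# modelview-projection matrix, one row per DP4.
PARAM mvp[4] = { state.matrix.mvp };
DP4 result.position.x, mvp[0], vertex.position;
DP4 result.position.y, mvp[1], vertex.position;
DP4 result.position.z, mvp[2], vertex.position;
DP4 result.position.w, mvp[3], vertex.position;
# Pass the vertex colour through unchanged; it gets
# interpolated across the triangle during rasterisation,
# exactly as in the fixed-function pipeline.
MOV result.color, vertex.color;
END
```

This program runs once per vertex; everything it writes to `result.*` is then interpolated across the triangle by the rasteriser.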

You cannot do per-pixel effects with vertex programs. However, per-pixel effects usually need some per-vertex setup, and that setup is exactly what vertex programs are good for. That is probably the source of your confusion: most per-pixel effects come with a per-vertex part as well.
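As an illustration of that split, here is a hypothetical per-vertex setup program for per-pixel lighting (an untested sketch; it assumes the application puts the object-space light position in `program.local[0]`, and that a fragment stage, e.g. a DOT3 texture combiner, does the per-pixel maths):

```
!!ARBvp1.0
PARAM mvp[4]   = { state.matrix.mvp };
PARAM lightPos = program.local[0];  # assumed: light position in object space
TEMP lvec;
# Usual position transform.
DP4 result.position.x, mvp[0], vertex.position;
DP4 result.position.y, mvp[1], vertex.position;
DP4 result.position.z, mvp[2], vertex.position;
DP4 result.position.w, mvp[3], vertex.position;
# Per-vertex part of the per-pixel effect: compute the light
# vector at this vertex and hand it to the rasteriser in a
# texture co-ordinate, where it gets interpolated per pixel.
SUB lvec, lightPos, vertex.position;
MOV result.texcoord[0], lvec;
END
```

The vertex program only computes the light vector at the three corners; the per-pixel dot product happens later, in the fragment stage.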

Hope it helps