I’m just starting to dive into vertex programs (ARB_vertex_program, NV_vertex_program, etc.), but before I begin I want to check whether I’m right about the following:
A vertex program transforms the three vertices of a triangle, and then the triangle gets rasterised just as if it had been drawn through the regular OpenGL pipeline?
So ‘all’ the vp does is calculate three sets of [xyz co-ordinates, (multiple) texture co-ordinates, lighting values], and these then get interpolated across the triangle?
Or is a vp called every time OpenGL calculates an interpolated co-ordinate, (set of) texture co-ordinates, or lighting value while rendering the triangle?
At the moment this is all very vague to me. I keep getting confused because on the one hand they are called vertex programs, but on the other hand they seem capable of doing per-pixel operations…
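To make the first interpretation concrete, here is a toy software sketch of how I currently picture it (plain Python, not real OpenGL; the function names and the uniform-scale “transform” are just my own placeholders): the vertex function runs exactly once per vertex, and the rasteriser only blends those three sets of outputs per fragment.

```python
def vertex_program(position, colour):
    """My mental model: runs once per vertex, transforms the position
    and passes the other attributes through."""
    # Hypothetical transform: just a uniform scale for illustration.
    transformed = tuple(c * 2.0 for c in position)
    return {"pos": transformed, "colour": colour}

def rasterise_fragment(outputs, bary):
    """My mental model of the rasteriser: for one fragment, blend the
    three per-vertex outputs with barycentric weights (w0, w1, w2) --
    no vertex program runs here."""
    w0, w1, w2 = bary
    blend = lambda a, b, c: tuple(
        w0 * x + w1 * y + w2 * z for x, y, z in zip(a, b, c)
    )
    return {
        "pos": blend(outputs[0]["pos"], outputs[1]["pos"], outputs[2]["pos"]),
        "colour": blend(outputs[0]["colour"], outputs[1]["colour"],
                        outputs[2]["colour"]),
    }

# Three triangle vertices, each with a position and a colour.
verts = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
         ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
         ((0.0, 1.0, 0.0), (0.0, 0.0, 1.0))]

# The vertex program runs exactly three times -- once per vertex.
outputs = [vertex_program(p, c) for p, c in verts]

# A fragment at the triangle's centroid gets interpolated values.
frag = rasterise_fragment(outputs, (1 / 3, 1 / 3, 1 / 3))
```

If this sketch is right, any “per-pixel” effect a vertex program produces would only come from how its per-vertex outputs get interpolated, not from the program itself running per pixel.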
Thanks in advance for explaining this part; without this basic starting point I am continually lost while reading these vertex_program extension specs.