Here's my plan: pass the light vector in through the primary color and the half-angle vector through the secondary color, then use the register combiners for the bump mapping. But I have to get those vectors into the same space as the normals. Could someone explain how I can do that without using vertex programs? Sorry for asking a stupid question.
You apply the same transformation (as you would have done in the vertex program) in software to each vertex, writing the results out as a new primary color array (light) and secondary color array (half angle). If you are using VAR, write the results to a temporary buffer, then copy the whole lot into VAR memory in one go.
The values you write to your primary [glColorPointer] and secondary color [glSecondaryColorPointer] arrays will be interpolated across your triangles and fed as inputs to your register combiner program. Beware: the interpolation will denormalize them across the triangles.
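A minimal sketch of the per-vertex software step described above. The helper names and the orthonormal tangent/binormal/normal basis are my assumptions, not anything from the thread; the range compression matches the usual [-1,1] to [0,1] color encoding that the combiners' expand input mapping undoes on the way back in.

```c
typedef unsigned char ubyte;

/* Express a world- or object-space direction in a vertex's tangent space.
   Assumes the per-vertex basis (tangent, binormal, normal) is orthonormal. */
static void to_tangent_space(const float d[3], const float tangent[3],
                             const float binormal[3], const float normal[3],
                             float out[3])
{
    out[0] = d[0]*tangent[0]  + d[1]*tangent[1]  + d[2]*tangent[2];
    out[1] = d[0]*binormal[0] + d[1]*binormal[1] + d[2]*binormal[2];
    out[2] = d[0]*normal[0]   + d[1]*normal[1]   + d[2]*normal[2];
}

/* Range-compress a normalized vector from [-1,1] into [0,255] color bytes,
   so the combiners can expand it back to a signed vector per fragment. */
static void pack_vector(const float v[3], ubyte out[3])
{
    int i;
    for (i = 0; i < 3; ++i)
        out[i] = (ubyte)((v[i] * 0.5f + 0.5f) * 255.0f);
}
```

You would run these over every vertex into two byte arrays and then point `glColorPointer(3, GL_UNSIGNED_BYTE, 0, lightColors)` and `glSecondaryColorPointerEXT(3, GL_UNSIGNED_BYTE, 0, halfColors)` at the results.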
Is that all you wanted to know? I assume you already know how to do the transformation; it's the same one you would do in a vertex program. If not, there are docs on NVIDIA's site that will set you straight. Look for "A Practical and Robust Bump-Mapping Technique" or something like that.
You could calculate the light position in object space (where your normals are) and then compute your light vector and half-angle vector from there. This involves multiplying your light's world-space position by the inverse of the matrix that moves the object into world space (CameraMatrix * Object2WorldMatrix = ModelViewMatrix). Note it's the plain inverse for positions; the inverse-transpose is what you'd use to transform normals. You want

Object2WorldMatrix^-1 * LightPositionWorldSpace = LightPositionObjectSpace

Do the same thing for your camera's position, then build the half-angle vector from the object-space light and view vectors:

Object2WorldMatrix^-1 * CameraPositionWorldSpace = CameraPositionObjectSpace
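A sketch of that world-to-object transform, assuming the object-to-world matrix is rigid (rotation plus translation, no scale), in which case its inverse is just the transposed rotation applied after subtracting the translation. The function name is illustrative; the layout is a column-major 4x4 in the OpenGL convention.

```c
/* Transform a world-space point into object space, given the object->world
   matrix m (column-major, OpenGL style). Assumes m is rigid: upper 3x3 is a
   pure rotation, so its inverse is its transpose. */
static void world_to_object_point(const float m[16],  /* object -> world  */
                                  const float pw[3],  /* world-space point */
                                  float po[3])        /* object-space out  */
{
    /* Undo the translation (stored in elements 12..14). */
    float t[3] = { pw[0] - m[12], pw[1] - m[13], pw[2] - m[14] };

    /* Multiply by the transposed rotation: dot t with each column. */
    po[0] = m[0]*t[0] + m[1]*t[1] + m[2]*t[2];
    po[1] = m[4]*t[0] + m[5]*t[1] + m[6]*t[2];
    po[2] = m[8]*t[0] + m[9]*t[1] + m[10]*t[2];
}
```

With the light and camera positions in object space once per object, the per-vertex light and view vectors are just subtractions against each vertex position, which is cheaper than transforming a vector at every vertex.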
Normally people know the light's and camera's positions in world coordinates, not in camera coordinates, which is why you don't use the entire modelview matrix.
Does that help any?