I’m making a simple test program for VR stereo 3D rendering and was trying to move the cameras, but in my program the Y translation values seem to have an inverted effect on the view compared to what I was expecting. The following makes the camera move up:
makeFrustum(50, 1.0, 0.1, 100);
glTranslatef(0, -0.5, 0);
glViewport(0, 0, w/2, h);
I thought that since I was in PROJECTION mode, the glTranslatef should transform the camera the other way. But apparently not.
So to continue with this, I’m looking for some general advice on how to approach building up a transform hierarchy in my scene: grouped transforms for objects, and the ability to apply transforms in the local space of any primitive/DAG location.
OpenGL doesn’t have a “camera”. Vertex positions are transformed by the model-view matrix then the projection matrix:
p' = projection * modelview * p
The only difference between the two is that lighting calculations are done before the projection matrix is applied, as the projection matrix often contains a projective transformation while the lighting calculations need to be performed in an affine space.
Typically, you implement a camera by making the inverse transformation the first transformation applied to the model-view matrix. E.g. to move the camera up, you move all of the objects down, etc.
Legacy OpenGL doesn’t have functions to invert an arbitrary transformation, but the transformation primitives (translate, rotate, scale) are trivially invertible and (A·B·C)^-1 = C^-1·B^-1·A^-1, so you invert the primitives and apply them in reverse order.
Thank you for the clarifications!
I’m now looking at some videos about making OpenGL cameras!