How can I import a camera calibration matrix into OpenGL?


I have a camera projection matrix P[3x4] that I obtained by calibration.

I would like to create the exact same view in OpenGL as the one described by the calibration matrix. To test the fidelity of the result I can compare the image rendered in OpenGL with the image taken by the calibrated camera (I have the image of the calibration object and the 3D coordinates of the calibration points).

I can’t figure out the connection between the GL_PROJECTION matrix and P. According to Hartley and Zisserman, the camera matrix P can be decomposed as P = K[R|t], where [R|t] can be used as the MODELVIEW matrix in OpenGL. However, the upper-triangular matrix K (the intrinsic camera parameters) doesn’t seem to be directly usable as the projective transformation in OpenGL.
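For concreteness, this is roughly how I load [R|t] as the MODELVIEW (just a sketch: it assumes R and t come row-major from the calibration, and it flips the y and z rows because OpenGL's camera looks down −z with y up, while the usual computer-vision convention looks down +z with y down):

```c
#include <assert.h>

/* Build a column-major OpenGL MODELVIEW matrix from a computer-vision
 * extrinsic pair [R|t].  The CV camera looks down +z with y down;
 * OpenGL looks down -z with y up, so the y and z rows of [R|t] are
 * negated (equivalent to premultiplying by diag(1,-1,-1)).
 * R is row-major 3x3, t is 3x1, mv is the 16-double GL matrix. */
void modelview_from_extrinsics(const double R[9], const double t[3],
                               double mv[16])
{
    const double flip[3] = { 1.0, -1.0, -1.0 };
    for (int row = 0; row < 3; ++row) {
        for (int col = 0; col < 3; ++col)
            mv[col * 4 + row] = flip[row] * R[row * 3 + col];
        mv[12 + row] = flip[row] * t[row];   /* fourth column: translation */
    }
    mv[3] = mv[7] = mv[11] = 0.0;            /* bottom row: 0 0 0 1 */
    mv[15] = 1.0;
}
/* then: glMatrixMode(GL_MODELVIEW); glLoadMatrixd(mv); */
```

That part seems fine; it's the projection side I'm stuck on.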

Any advice on how I can incorporate the intrinsic parameters info into OpenGL GL_PROJECTION matrix would be greatly appreciated!



The easiest way is to decompose your intrinsics into parameters (focal length, skew and principal point) and build your OpenGL-style projection matrix with glFrustum. That’s how I do it. (I work with calibrated cameras of real scenes, so I know what kind of annoying pain you’re in. :wink:)
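The glFrustum bounds fall straight out of the intrinsics. Here's a sketch (it assumes zero skew and the principal point measured from the top-left pixel; the y bounds are flipped because image y runs downward while OpenGL y runs upward):

```c
#include <assert.h>

/* Turn pinhole intrinsics (fx, fy: focal lengths in pixels; cx, cy:
 * principal point, origin at the top-left pixel) into glFrustum bounds
 * on the near plane.  Assumes zero skew; bottom/top are swapped and
 * negated relative to the image because GL's y axis points up. */
void frustum_from_intrinsics(double fx, double fy, double cx, double cy,
                             int width, int height, double znear,
                             double *left, double *right,
                             double *bottom, double *top)
{
    *left   = -cx * znear / fx;
    *right  = (width  - cx) * znear / fx;
    *bottom = -(height - cy) * znear / fy;
    *top    = cy * znear / fy;
}
/* then: glMatrixMode(GL_PROJECTION);
 *       glLoadIdentity();
 *       glFrustum(left, right, bottom, top, znear, zfar); */
```

glFrustum then fills in the depth terms of the matrix from your chosen znear/zfar, so the z-buffer question takes care of itself with this route.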

The alternative is to premultiply your intrinsics matrix with a homography that maps the image plane, measured in pixels, to the canonical OpenGL image plane bounded by ±1.
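That homography is just an affine rescale taking pixels [0,W]×[0,H] (y down) to NDC [−1,1]² (y up). A sketch of the premultiplication, under the same top-left pixel-origin assumption:

```c
#include <assert.h>

/* Premultiply the 3x3 intrinsics K (row-major) by the homography N
 * that maps pixel coordinates [0,W]x[0,H] (y down) to OpenGL NDC
 * [-1,1]^2 (y up).  The resulting 3x3 supplies the x, y and w rows of
 * GL_PROJECTION; a z row built from the near/far planes still has to
 * be added separately. */
void ndc_times_intrinsics(const double K[9], int width, int height,
                          double out[9])
{
    const double N[9] = {
        2.0 / width,  0.0,          -1.0,   /* x: [0,W] -> [-1, 1] */
        0.0,         -2.0 / height,  1.0,   /* y: [0,H] -> [ 1,-1] (flip) */
        0.0,          0.0,           1.0
    };
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            out[i * 3 + j] = 0.0;
            for (int k = 0; k < 3; ++k)
                out[i * 3 + j] += N[i * 3 + k] * K[k * 3 + j];
        }
}
```

If the principal point sits at the image centre, the off-diagonal x/y terms come out zero and you get the familiar symmetric-frustum projection.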

Of course, either way you’ll need to define some z scaling (from your chosen near and far planes) for the depth terms of your OpenGL projection matrix, since Hartley & Zisserman haven’t addressed the z-buffer in scene reconstruction :slight_smile:

Hope this helps!


Hi John,

Thanks for the reply. I had a go at decomposing the intrinsics and it seems fine so far.