I have done a camera calibration and now have:

the rotation and translation matrix (from world coordinates to camera coordinates), i.e. the exterior orientation. I loaded this matrix into the modelview matrix.

the interior orientation (camera coordinates to sensor coordinates), which consists of:
a) the focal length of the camera, f
b) the "image main point" (principal point: the point where the optical ray through the center of projection intersects the image plane perpendicularly), (cx, cy)
c) scaling factors for each axis, dx, dy (ratio "camera coordinates / sensor coordinates")

Now my question:
How do I generate the projection matrix (GL_PROJECTION) from the parameters f, cx, cy, dx, dy?

Focal length? GL doesn't do raytracing (which would be necessary to emulate the behavior of a real camera); it just does a projection calculation for the vertices onto a plane.

You can compute the aspect ratio from (dx, dy), and you can decide what field-of-view angle to use based on how close you want to place the virtual camera.

y = distance * tan(angle);
x = y * (dx / dy);
glFrustum(-x, x, -y, y, distance, far);

cx and cy can be applied in the modelview using a translation function.

I’m not sure if this will give you the results you want.

I could also suggest that you use the "enhanced perspective" functions presented in the Red Book. They give you the effects of a camera, like zoom. I think the function names were aafrustum and aaperspective; I have them in my glh library as well.

I actually had this problem a few months ago. It's not that difficult to get a good approximation. OpenGL can't do things like skew or radial distortion… at least not easily. I made a web page (it's still rough) that explains how my program, calib2gl, works, with some testing; towards the end is the accuracy part. Soon I'll fix the page and maybe it'll get linked in Google or something. http://www.seas.upenn.edu/~aiv/calib2gl/

The camera.c and camera.h files contain the projection function, which you're welcome to use; it's open source. The source is up as well.

As for what you do: it's just math to calculate a viewing frustum from these parameters. I notice you don't have the CCD size (in pixels, that is); you need that too, at least if you want to approximate the camera completely: a 640x480 camera gets a 640x480 viewing window.

2 equations from computer vision:

sx = f/dx
This says that focal length divided by pixel size gives you a scale factor, useful if you want to compute the projection yourself: if your image point is 3 pixels from the center in x, then its real-world offset from the optical axis is (3 / sx) * Z. Watch the units: f is in mm and dx is in mm/pixel, so sx comes out in pixels (units are important).

dx = Fw/w
This is the pixel size in mm: take the frustum width and divide it by the width in pixels to get mm/pixel (i.e., the size of one pixel on the CCD chip).

Solving for Fw you get:

Fw = w*f/sx

Do the same for Fh.

left = Fw*(-cx/w)
right = Fw*(1-cx/w)
bottom = Fh*(-1+cy/h)
top = Fh*(cy/h)

I followed your interesting discussion and I've downloaded the calib2gl code.
I have a calibration method which generates a matrix from which I get a set of intrinsic camera parameters suitable for the gluPerspective call.
My question is, I think, a bit trivial, but I can't find an answer: do you think I can transform gluPerspective into a glFrustum call? Moreover, my method doesn't return a near-clip distance… would that affect the final overlaid visualization?