Calculating camera position from scene bounding box

So I have a scene of objects and a camera.

What I would like to do is calculate the position of the camera such that:

  1. All objects are fully contained in the viewport AND
  2. The camera is as close as possible to the objects.

In plainer English, the effect I’m going for is that all objects in the scene are always visible, and the camera is located as close to the objects as possible while this is still the case.

I realise that a number of constraints must be applied to the problem so that one unique solution (i.e. one unique position of the camera) can be calculated.

These constraints are:

  1. The camera will always look down the negative Z axis
  2. The X and Y coordinates of the camera position will be the center of the bounding box of the objects
  3. The projection matrix is created using glFrustum

I hope that the problem is clear and that I have listed all the constraints required for there to be one solution.

Does anyone have any idea of how I can solve this?


The problem is underconstrained. You haven’t specified whether your view frustum is symmetric or asymmetric (presumably symmetric). Further, the X and Y FOV angles are unspecified. For the symmetric case, look at the interface to gluPerspective (which is essentially a wrapper around glFrustum). That’ll help you see the problem.

Imagine the problem in 2D. Draw an isosceles triangle with its apex at the eyepoint, surrounding your scene (which you can represent with a circle). Think about the geometry of this. Given your above constraints plus the FOV angles, a little bit of trig will give you the answer you seek in X, and then in Y.