Basic projection model

Hello,

I use a perspective projection model to render an OBJ mesh. I want to adjust the distance of my object from the camera so that my 3D object has a precise size in the rendered image.

I build the perspective matrix:

Matx44f projectionMatrix(2 * fx / w, 0,          1 - 2 * cx / w,    0,
                         0,          2 * fy / h, 2 * cy / h - 1,    0,
                         0,          0,          (f + n) / (n - f), (2 * f * n) / (n - f),
                         0,          0,          -1,                0);

With

Matx33f K = Matx33f(fx_fy, 0,     width / 2.f,
                    0,     fx_fy, height / 2.f,
                    0,     0,     1);

// camera image size
  int width = 1280;
  int height = 720;
  // near and far plane of the OpenGL view frustum
  float zNear = 0.2;
  float zFar = 1200.0;

  float fx_fy = 1067;

My model-view matrix is as follows:

      Matx44f modelViewMatrix = lookAtMatrix * (pose);
      Matx44f modelViewProjectionMatrix = projectionMatrix * modelViewMatrix;

And the position of my 3D object is:

float distance = 250.0; // 500, 750, 1000
float tx = 0;
float ty = 0;
float tz = distance;

I render my object in the center of the camera image and measure the object's width in the rendered image at several distances.

I get these results:

distance(mm) → width(pixels)
1000 → 133
750 → 179
500 → 271
250 → 645

I don’t really understand these pixel values. Mathematically, between the measurement at 250 mm and the one at 1000 mm, shouldn’t I get a ×4 factor on the width?

Did I do something wrong?

You’re right that there seems to be a rough approximation (or another error) somewhere.
I cannot tell whether it is caused by the pixels not being square. You can get the pixel dimensions by dividing the screen width/height by the pixel counts in x and y of your screen.
Without being a specialist, I’ve seen at least two different ways of building projection matrices. Since they are unlikely to be equal, a sizable approximation seems likely. The matrix may not faithfully present what you perceive by eye in the real world.

What about the look of the rendering? Does it agree with your perception?

Your math presentation is hard to use as given, since it is written in terms of variables whose actual values are not shown where they come into play.

Is your object completely flat, at z=0 in object space?

If it isn’t, you can’t expect the result to be exactly inverse-proportional.

My lookAt matrix is integrated as mentioned in modelViewMatrix:
lookAtMatrix = Transformations::lookAtMatrix(0, 0, 0, 0, 0, 1, 0, -1, 0);

And my lookAtMatrix:

Matx44f Transformations::lookAtMatrix(float ex, float ey, float ez, float cx,
                                      float cy, float cz, float ux, float uy,
                                      float uz) {
  Vec3f eye(ex, ey, ez);
  Vec3f center(cx, cy, cz);
  Vec3f up(ux, uy, uz);

  up /= norm(up);

  Vec3f f = center - eye;  // forward direction
  f /= norm(f);

  Vec3f s = f.cross(up);   // side (right) direction
  s /= norm(s);

  Vec3f u = s.cross(f);    // recomputed orthogonal up
  u /= norm(u);

  return Matx44f(s[0], s[1], s[2], -s.dot(eye), u[0], u[1], u[2], -u.dot(eye),
                 -f[0], -f[1], -f[2], f.dot(eye), 0, 0, 0, 1);
}

The value of lookAtMatrix is:

lookAtMatrix [1, 0, -0, -0;
 0, -1, 0, -0;
 -0, -0, -1, 0;
 0, 0, 0, 1]

All my matrices values are:

pose [1, 0, 0, 0;
 0, 0.85065019, 0.5257321, 0;
 0, -0.5257321, 0.85065019, 250;
 0, 0, 0, 1]
modelViewMatrix [1, 0, 0, 0;
 0, -0.85065019, -0.5257321, 0;
 0, 0.5257321, -0.85065019, -250;
 0, 0, 0, 1]
modelViewProjectionMatrix [1.6671875, 0, 0, 0;
 0, 2.5212326, 1.5582116, 0;
 0, -0.52590734, 0.85093373, 249.68326;
 0, -0.5257321, 0.85065019, 250]

No, my object is a 3D model loaded from obj/mtl files.

What do you mean by “If it isn’t, you can’t expect the result to be exactly inverse-proportional.”

The distance of a projected vertex from the projection centre is inversely proportional to the distance of the vertex in front of the viewpoint.

Note that it’s the (eye-space) Z coordinate of the vertex that matters, not the Z coordinate of the origin of the object.

Also, as the object moves towards the camera, the vertices which form the silhouette (and thus determine the screen-space bounding box) may change.