Suppose I have a camera and an object in the real world, and I was able to extract the global projection matrix of that camera, P = K[R|T], which is a 3×4 matrix. Let's also say I have a 3D model of that object.
Now I want to render that 3D object in OpenGL using the real camera matrix. The intuitive solution is that simply multiplying the vertex positions by P should yield a view similar to what the real camera captured. The thing is, I tried it and it didn't work.
Can someone point me at where to look?

It seems you’re only talking about the camera projection transform here. Are you using the correct camera modeling transform too? This is the inverse of the viewing transform.

Hey
Well, the modelview matrix is [R|T], where R is the rotation matrix and T is the translation vector. I suspect the problem resides in the clipping and depth testing or something like that, since P is obtained from a real-world pinhole camera and thus doesn't follow the OpenGL conventions, I guess.

If the camera in question really is a pinhole camera (without a lens), it should be straightforward to obtain a consistent result via a perspective projection.
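To make that concrete, here is a sketch of turning a pinhole intrinsic matrix K into an OpenGL-style projection matrix. It assumes OpenCV-style conventions (focal lengths fx, fy and principal point cx, cy in pixels, image origin at the top-left, camera looking down +z), an image of size w×h, and near/far planes you choose yourself; the exact signs depend on your coordinate conventions, so treat this as a starting point rather than a drop-in:

```python
import numpy as np

def projection_from_intrinsics(K, w, h, near, far):
    """Build a 4x4 OpenGL-style projection matrix from a pinhole
    intrinsic matrix K (OpenCV convention: +z in front, image y down).
    Maps camera-space points to OpenGL clip space; the modelview
    matrix [R|T] stays separate."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([
        [2*fx/w, 0.0,     2*cx/w - 1.0,           0.0],
        # y row is negated: image y points down, NDC y points up
        [0.0,    -2*fy/h, 1.0 - 2*cy/h,           0.0],
        # depth row maps z in [near, far] to NDC z in [-1, 1]
        [0.0,    0.0,     (far+near)/(far-near),  -2*far*near/(far-near)],
        # w_clip = +z, since the camera looks down +z in this convention
        [0.0,    0.0,     1.0,                    0.0],
    ])
```

Remember that OpenGL expects column-major storage when you upload the matrix, so either transpose it first or pass `transpose=GL_TRUE` to `glUniformMatrix4fv`. This also addresses the clipping suspicion above: the third row is what gives the raw pinhole P the depth information OpenGL needs.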

For a camera with a lens, you have three main issues:

1. Rays may not converge on a point.

2. Even if rays do converge on a point (or close enough), that point may be behind or in front of the camera.

3. The lens transformation may be non-linear (more pronounced with a wide-angle lens).

If point #1 is significant, then it’s impossible to obtain a consistent result from forward rendering; you have to use ray-tracing.

Point #2 just requires calibration, e.g. photographing objects of known sizes at known (relative) distances and calculating the distance offset.

Point #3 can be dealt with by applying a 2D transformation to the rendered image as a post-processing step. In polar coordinates, this boils down to transforming the radius while leaving the angle unchanged. In Cartesian coordinates (where the origin is the centre of projection), that means scaling the coordinate vector by a function of its magnitude.
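That radius-only transformation can be sketched as follows. This assumes a simple polynomial radial model with two hypothetical coefficients k1 and k2 (as in the common Brown–Conrady-style distortion models), applied to 2D points expressed relative to the centre of projection; a real post-process would sample the rendered image through this mapping rather than move points:

```python
import numpy as np

def radial_warp(points, k1, k2):
    """Scale each 2D point (relative to the centre of projection) by a
    function of its magnitude: p' = p * (1 + k1*r^2 + k2*r^4), r = |p|.
    The angle is left unchanged; only the radius is transformed."""
    r2 = np.sum(points**2, axis=-1, keepdims=True)  # squared radius per point
    return points * (1.0 + k1 * r2 + k2 * r2**2)
```

In a fragment shader you would use the same formula in reverse: for each output pixel, compute the source coordinate by warping the radius, then sample the rendered texture there.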