Getting OpenGL to render in a specific raster

Hello, this is my first time posting here.

I have data acquired from a 3D reconstruction using multiple cameras. After the reconstruction, each pixel is assigned its corresponding 3D coordinate.
What I do now is import the calibrated cameras into OpenGL. With the 3D data available and the cameras set up, I can view the reconstructed scene in OpenGL.

Now to my problem: when I use GL to render the reconstructed scene through those cameras, the image comes out slightly distorted. As a result, some of the original pixels from the 3D reconstruction no longer match the pixels I obtain by calling glReadPixels() on the rendered scene.

Could this be caused by OpenGL's interpolation?

How can I avoid this kind of problem?
I read in OpenGL FAQ 14.015 that it is possible to tie pixels to 3D vertices using glDrawPixels() and glRasterPos*().
Would this solve my problem?
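
For reference, a minimal sketch of what that FAQ entry describes (hypothetical variables: px/py/pz is the 3D point the image block should be tied to, pixels the image data; a current GL context is assumed):

    /* Set the raster position from a 3D vertex; the following            */
    /* glDrawPixels() call writes the pixel block starting at the window  */
    /* coordinate that vertex projects to.                                */
    glRasterPos3f(px, py, pz);
    glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);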

Best regards, mike

EDIT: Would it be possible to use a vertex/fragment shader to modify the position of the final pixels? (like with gl_FragCoord?)

I'm not sure I understand. You render something with GL, then read it back with glReadPixels(), and you don't get the same result as on screen?

Perhaps you can post screenshots.

What format are you reading back? I suggest GL_BGRA and GL_UNSIGNED_BYTE.

Reference: http://www.opengl.org/wiki/Common_Mistakes#Texture_upload_and_pixel_reads
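
For example, a readback along those lines could look like this (a sketch, assuming a current context; w and h are the window size, and GL_PACK_ALIGNMENT is set to 1 so rows come back tightly packed):

    GLubyte *buf = malloc((size_t)w * h * 4);      /* 4 bytes per pixel for BGRA      */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);           /* no row padding on readback      */
    glReadBuffer(GL_BACK);                         /* read the buffer you rendered to */
    glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, buf);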

OK, let me try to be clearer this time.
There are two steps involved with my problem.

First of all, I do a 3D multi-view reconstruction of a scene. This takes a picture from a real camera and assigns every pixel of that picture a 3D coordinate.

The camera is fully calibrated, with extrinsic and intrinsic parameters known. With those at hand, I can set up a GL_PROJECTION and a GL_MODELVIEW matrix to import the real camera into OpenGL. The imported camera is then used to 'look' at the 3D data, which I draw with glVertex().
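
(One common way to build such a GL_PROJECTION matrix from the intrinsics is via glFrustum(); a rough sketch, where fx, fy are the focal lengths in pixels, cx, cy the principal point, w and h the image size, and the exact signs depend on the y-axis convention:)

    double n = 0.1, f = 100.0;                       /* near/far clip planes      */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-cx * n / fx, (w - cx) * n / fx,       /* left, right               */
              -(h - cy) * n / fy, cy * n / fy,       /* bottom, top               */
              n, f);
    glMatrixMode(GL_MODELVIEW);                      /* extrinsics [R|t] go here, */
                                                     /* e.g. via glLoadMatrixd()  */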

For that, I set up a window with the same size as the camera's resolution and read the image (the view through the camera) with glReadPixels().

Up to this point everything works fine.

I have added a quick illustration of what I am trying to describe.
In the upper image there are three pixels marked black. These are the pixels that were seen in step one, during the reconstruction.

Now, when I let OpenGL render the same 3D data through the same camera, I see the image on the bottom: the three pixels end up in slightly different positions than before, which should not happen.

Sorry for the long post; I hope I have made my question clearer now.

So, to my question: is it somehow possible to control where glVertex() data ends up inside the window?
I have read about glRasterPos(), gluUnProject() and gluProject(), but I am not sure whether these functions are of any help for my problem.
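
For instance, I imagine something like this could tell me where a reconstructed point ends up (a sketch; px, py, pz is one of the 3D points):

    GLdouble model[16], proj[16], wx, wy, wz;
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);
    gluProject(px, py, pz, model, proj, viewport, &wx, &wy, &wz);
    /* wx, wy are window coordinates with the origin in the lower-left */
    /* corner, so the y axis is flipped compared to image row indices. */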

I hope you can help me, best regards, mike

OpenGL's 3D projections are limited to purely 4D homogeneous projective coordinates.
Real-world cameras often have radial distortion and other 'interesting cases': http://en.wikipedia.org/wiki/Distortion_%28optics%29
Most probably the difference comes from this.
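
For illustration, the usual radial model looks roughly like this (k1, k2 are hypothetical distortion coefficients, x, y the normalized image coordinates, fx, fy, cx, cy the intrinsics); this is exactly the part a single 4x4 projection matrix cannot express:

    double r2 = x * x + y * y;                 /* squared radius from the optical axis */
    double d  = 1.0 + k1 * r2 + k2 * r2 * r2;  /* radial scale factor                  */
    double u  = fx * (x * d) + cx;             /* distorted pixel coordinates          */
    double v  = fy * (y * d) + cy;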

Make sure you account for that yourself by converting the reconstructed 3D coordinates to 2D screen coordinates in your own code, then draw your 2D points directly on a GL window with something like this:
http://wiki.allegro.cc/index.php?title=2D_using_OpenGL
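
In other words, something along these lines (a sketch, assuming you have already computed distorted pixel coordinates u[i], v[i] for n points and the window is w x h pixels):

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, w, 0, h, -1, 1);                /* one unit == one pixel        */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_POINTS);
    for (int i = 0; i < n; ++i)
        glVertex2f(u[i] + 0.5f, v[i] + 0.5f);  /* +0.5 to hit the pixel centre */
    glEnd();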

Yes, one problem may be radial distortion to some degree, and another reason is surely the interpolation used by OpenGL.

I will read the article you posted, thanks.