I’m trying to solve a problem which I’m pretty sure can be done. I’m using deferred shading and have all the positions of my scene in eye space. The problem: I have a photograph (taken looking down a street) that I want to embed into the final color/material buffer, basically an overlay on top of the depth/positions. This texture should line up with some calibration points I have. So far I have not had success: my pixels come out too large at the bottom and not large enough at the top (the image already has perspective baked into it… hmmm). I was hoping I could just render it onto a quad (the endpoints multiplied by a perspective matrix) and then somehow embed that quad’s result into the final buffer. I do understand that since the interpolation across the quad is not linear, I would need to use the Q texture coordinate. I have not done this yet, but I don’t think that particular artifact is the only issue.
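For what it’s worth, the Q-coordinate issue can be illustrated numerically. The sketch below (plain Python, with a made-up perspective matrix for the photo’s camera) projects two endpoints of a quad edge, then compares naively interpolating the already-divided 2D texture coordinates against interpolating (s, q) and dividing per fragment, which is what passing a 4D coordinate to GLSL’s `textureProj` does:

```python
def matvec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Hypothetical OpenGL-style perspective matrix for the photo's camera:
# 90 degree FOV, aspect 1, near 1, far 100.
n, f = 1.0, 100.0
proj = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, -(f + n) / (f - n), -2.0 * f * n / (f - n)],
    [0.0, 0.0, -1.0, 0.0],
]

# Two endpoints of a quad edge, in the photo camera's eye space.
a = matvec(proj, [0.0, 0.0, -2.0, 1.0])   # clip coords (s, t, r, q)
b = matvec(proj, [2.0, 0.0, -4.0, 1.0])

# Exact answer: project the eye-space midpoint of the edge itself.
mid = matvec(proj, [1.0, 0.0, -3.0, 1.0])
exact_u = mid[0] / mid[3]                       # 1/3

# Wrong: divide by q at the vertices, then interpolate the 2D coords.
naive_u = 0.5 * (a[0] / a[3] + b[0] / b[3])     # 0.25, off target

# Right: interpolate (s, q) linearly, divide per fragment --
# this is what a 4D texcoord plus textureProj does.
s = 0.5 * (a[0] + b[0])
q = 0.5 * (a[3] + b[3])
proj_u = s / q                                  # 1/3, matches
```

The mismatch between `naive_u` and `exact_u` is exactly the “too large at the bottom, too small at the top” stretching you’re seeing; carrying q through to the fragment fixes it.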
Looking for a change of mindset or guidance.
Real-world camera calibration is tricky but very important to get right before trying to mix real and virtual shots.
OpenCV has a lot of tools for that, either manual or semi-automatic, based on filming a checkerboard pattern (best glued to rigid cardboard):
‘Intrinsics’ define the internal camera parameters, such as focal length, barrel distortion, pixel aspect ratio, principal point offset, etc.
‘Extrinsics’ define the external parameters, i.e. the camera’s position/rotation relative to the scene.
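To make the two concrete, here’s a minimal pinhole-camera sketch (plain Python, made-up numbers: 800 px focal length, 640x480 photo, camera pulled back 5 units). Once calibration has given you K (intrinsics) and R, t (extrinsics), projecting a scene point onto the photo is just p = K (R X + t) followed by the perspective divide:

```python
# Hypothetical intrinsics: focal lengths in pixels and principal point.
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0   # image center of a 640x480 photo
K = [[fx, 0.0, cx],
     [0.0, fy, cy],
     [0.0, 0.0, 1.0]]

# Hypothetical extrinsics: identity rotation, camera 5 units back
# (OpenCV convention: +z points forward into the scene).
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 5.0]

def project(X):
    """Project a world-space point to pixel coords: p = K (R X + t)."""
    # Extrinsics: world space -> camera space.
    cam = [sum(R[r][c] * X[c] for c in range(3)) + t[r] for r in range(3)]
    # Intrinsics plus perspective divide: camera space -> pixels.
    u = fx * cam[0] / cam[2] + cx
    v = fy * cam[1] / cam[2] + cy
    return u, v

# A point one unit right of the origin lands right of the image center:
print(project([1.0, 0.0, 0.0]))   # -> (480.0, 240.0)
```

In practice you would get K, R and t from OpenCV (`findChessboardCorners` to locate the checkerboard, then `calibrateCamera`, which also returns distortion coefficients), and undistort the photo first (`undistort`) so this simple pinhole model actually holds.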