Using OpenCV homography matrix in OpenGL as view matrix and projection matrix

I’ve been struggling with my project for a while now: I’m calibrating projections onto a table, combined with computer vision. I’m using OpenCV for the input and OpenGL (version 3 with shaders, GLSL 1.30) for the projection.

I’ve been reading a lot about this, and while the resources are really detailed, somehow it’s not working out for me. The best I get is a single white line (the side of a plane?) while projecting one white square. When I play around with the values of the view and projection matrices by hand, I am able to see my white square. However, my goal is to project onto a specific surface (with the corner points as accurate as possible), so I need the calibration.

My process at the moment:

- My camera is calibrated with OpenCV’s calibrateCamera(); I use the resulting intrinsic camera matrix in the steps below.
- With some basic view and projection matrices, I project a chessboard onto my surface. By trial and error I managed to get the projection roughly onto the surface. But since I want the corners of my projected square to land exactly on the corners of my surface, I calibrate the projection.
- For the projection calibration I use OpenCV’s findChessboardCorners(). I calculate where I expect the corners to be on the surface, convert both sets of points to a normalized coordinate system (-1 to 1 on both axes), and from the two vectors of normalized points I compute the homography with findHomography().

From this point I tried two different approaches, both of which are failing me. For one I was following Kronick; for my current solution I follow an example that I’ve seen in a lot of places, using cv::decomposeProjectionMatrix(). Both use the camera matrix and the homography matrix.

With decomposeProjectionMatrix I do get to see my square the normal way, but much larger than I expected. First the OpenCV projection matrix P is built from the 3x3 homography matrix H by inserting a column: H’s first two columns become P’s first two columns, a fixed (0, 0, 1) column is inserted as the third, and H’s third column becomes the fourth (translation) column.

cv::Mat P(3, 4, cv::DataType<float>::type); // H is the 3x3 homography from findHomography()
P.at<float>(0, 0) = H.at<float>(0, 0);
P.at<float>(1, 0) = H.at<float>(1, 0);
P.at<float>(2, 0) = H.at<float>(2, 0);
P.at<float>(0, 1) = H.at<float>(0, 1);
P.at<float>(1, 1) = H.at<float>(1, 1);
P.at<float>(2, 1) = H.at<float>(2, 1);
P.at<float>(0, 2) = 0;
P.at<float>(1, 2) = 0;
P.at<float>(2, 2) = 1;
P.at<float>(0, 3) = H.at<float>(0, 2);
P.at<float>(1, 3) = H.at<float>(1, 2);
P.at<float>(2, 3) = H.at<float>(2, 2);

std::cout << "P: " << P << std::endl;

// Decompose the projection matrix into:
cv::Mat K(3, 3, cv::DataType<float>::type); // intrinsic parameter matrix
cv::Mat R(3, 3, cv::DataType<float>::type); // rotation matrix
cv::Mat T(4, 1, cv::DataType<float>::type); // translation vector
cv::decomposeProjectionMatrix(P, K, R, T);

cv::Mat T2;
T2 = cv::Mat::eye(4, 4, cv::DataType<float>::type); // translation matrix
T2.at<float>(3, 0) = T.at<float>(0, 0) / T.at<float>(3, 0);
T2.at<float>(3, 1) = T.at<float>(1, 0) / T.at<float>(3, 0);
T2.at<float>(3, 2) = T.at<float>(2, 0) / T.at<float>(3, 0); // divided by w

cv::Mat R2;
R2 = cv::Mat::eye(4, 4, cv::DataType<float>::type); // rotation matrix
R2.at<float>(0, 0) = R.at<float>(0, 0);
R2.at<float>(0, 1) = R.at<float>(0, 1);
R2.at<float>(0, 2) = R.at<float>(0, 2);
R2.at<float>(1, 0) = R.at<float>(1, 0);
R2.at<float>(1, 1) = R.at<float>(1, 1);
R2.at<float>(1, 2) = R.at<float>(1, 2);
R2.at<float>(2, 0) = R.at<float>(2, 0);
R2.at<float>(2, 1) = R.at<float>(2, 1);
R2.at<float>(2, 2) = R.at<float>(2, 2);

cv::Mat view = R2 * T2;

view_mat.m[0] = view.at<float>(0, 0);
view_mat.m[1] = view.at<float>(0, 1);
view_mat.m[2] = view.at<float>(0, 2);
view_mat.m[3] = view.at<float>(0, 3);

view_mat.m[4] = view.at<float>(1, 0);
view_mat.m[5] = view.at<float>(1, 1);
view_mat.m[6] = view.at<float>(1, 2);
view_mat.m[7] = view.at<float>(1, 3);

view_mat.m[8] = view.at<float>(2, 0);
view_mat.m[9] = view.at<float>(2, 1);
view_mat.m[10] = view.at<float>(2, 2);
view_mat.m[11] = view.at<float>(2, 3);

view_mat.m[12] = view.at<float>(3, 0);
view_mat.m[13] = view.at<float>(3, 1);
view_mat.m[14] = view.at<float>(3, 2);
view_mat.m[15] = view.at<float>(3, 3);

The resulting view_mat.m (a float[16] array) is then used in the shaders.
What am I missing? I hope someone can help me out with this.

Thank you!