I’m using OpenCV to build a face recognition pipeline. I have to warp a lot of face textures from the source image to an aligned shape before using them. Instead of using OpenCV’s warping functions, I decided to use OpenGL, and the result is much faster (150 ms → 3 ms per texture).
I have two types of texture to consider:
- RGB textures from JPEG images. No problem here.
- Floating-point textures. Since I align them, each one is a zero-mean, normalized matrix, so it contains negative values. When I warp these, the values come out wrong: all negative values become 0.01, and even the positive values seem to be clamped.
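For context, here is a minimal sketch (in NumPy, with a made-up 2×2 patch) of the zero-mean normalization I apply, and of what clamping to [0, 1] does to such data:

```python
import numpy as np

# Hypothetical face patch (any float matrix works).
patch = np.array([[10.0, 20.0], [30.0, 40.0]], dtype=np.float32)

# Zero-mean, normalized matrix, as used for alignment.
normalized = patch - patch.mean()
normalized /= np.linalg.norm(normalized)

print(normalized.min() < 0)  # True: roughly half the values are negative

# Clamping to [0, 1], as a normalized fixed-point GL texture would do,
# destroys every negative value.
clamped = np.clip(normalized, 0.0, 1.0)
print((clamped == 0).sum())  # the negatives all collapse to 0
```

So any format that clamps to [0, 1] loses half of my data.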
I’m absolutely sure it’s a problem with my format. First I thought that:

    glTexImage2D(GL_TEXTURE_2D,
                 0,          // I don't need mipmaps
                 GL_R32F,    // 1 channel, floating point
                 size, size,
                 0,
                 GL_R32F,    // 1 channel, floating point
                 GL_FLOAT,   // float
                 ptr);       // pointer to data

must work, and it still seems the most logical call to me. But with it I only get zeros in the texture. I’m sure that each element of my original matrix is sizeof(float).
The only thing I found that puts some values in my texture is:
ptr to data);
But those values are wrong, so it’s not a solution… To read back the warped image, I use:

    glReadPixels(0, 0, size, size, GL_RED, GL_FLOAT, ptr /* to matrix */);
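The symptom can be reproduced off-line with a small model (a sketch, assuming the texture behaves like a normalized fixed-point format that clamps to [0, 1]; `simulate_fixed_point_roundtrip` is a hypothetical helper, not a GL call):

```python
import numpy as np

def simulate_fixed_point_roundtrip(data, bits=8):
    """Hypothetical model of uploading floats into a normalized
    fixed-point texture and reading them back: values are clamped
    to [0, 1] and quantized to `bits` bits of precision."""
    scale = (1 << bits) - 1
    clamped = np.clip(data, 0.0, 1.0)
    return np.round(clamped * scale) / scale

data = np.array([-0.7, -0.01, 0.2, 1.3], dtype=np.float32)
out = simulate_fixed_point_roundtrip(data)
print(out)  # negatives go to 0, values above 1 go to 1
```

This matches what I observe: negatives disappear and large values are capped.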
What am I missing in the format?
I thought that GL_RED would expand my data to RGBA with 1 for the alpha component and 0 for the other components, which could explain the clamping. But why does GL_R32F give me nothing at all? Each value uses 32 bits and is floating point…
The initialization of pixel storage and the like is done by glewInit().