Rendering an array of depths to a color texture

Hi,

As a beginner in OpenGL (ES), I've come across a problem that seems fairly uncommon.
So I registered, hoping someone can get me on the right track.

I’m developing a mobile app that uses a depth sensor (think Kinect, but portable).
The depths it produces are stored in a one-dimensional array of floats.
Each float represents the depth in millimeters.
The array represents a depth texture synced with the color camera texture.
The dimensions of these textures are 640x480: 640 pixels wide and 480 pixels high.
The array stores depths row by row. So the first 640 values are depths for row 1, and so on.
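
So, to pick out a single sample (calling the raw float array depths here, the name is just a placeholder):

float depthMm = depths[row * 640 + column]; // row-major layout: 640 samples per row, depth in millimetres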

Now, to use these depths in an OpenGL shader I thought of these options:

  • translate the depths to a texture by bit-shifting each float value into an RGBA data array, and 'translate' the RGBA back to a depth value in the shader by recombining the channels (done this successfully)
  • translate the depths to a texture by rendering one point per depth value (the depth array's length) into an offscreen framebuffer whose viewport matches the depth texture's dimensions

Quick explanation of option one:
I loop over the array, bit shift each float value so it fits the RGBA space (4 x unsigned bytes), and in my shader I recombine the RGBA values to get back to the depth value as a float.
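
On the Java side the packing looks roughly like this (a simplified sketch, not my exact code: here I round the depth to whole millimetres and spread the resulting integer over the byte channels):

// Pack each depth (in whole millimetres) into the R, G, B channels of an
// RGBA / UNSIGNED_BYTE texture. The shader later recombines the channels back
// into millimetres (taking the 0..1 normalisation of texture reads into account).
byte[] rgba = new byte[640 * 480 * 4];
for (int i = 0; i < depths.length; i++) {
	int mm = Math.round(depths[i]);              // depth rounded to whole millimetres
	rgba[i * 4]     = (byte)( mm        & 0xFF); // R: bits 0-7
	rgba[i * 4 + 1] = (byte)((mm >>  8) & 0xFF); // G: bits 8-15
	rgba[i * 4 + 2] = (byte)((mm >> 16) & 0xFF); // B: bits 16-23
	rgba[i * 4 + 3] = (byte)0xFF;                // A: unused, kept opaque
}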

Now for the real problem: option two.
For this I use two vertex buffers: one for the depths, and one for the depth coordinates.

The coordinates are calculated as follows:


float[] depthCoordinates = new float[640 * 480 * 2]; // one (x, y) pair per depth sample
int coordinateIndex = 0;
double xStep = 1d / 320; // 2 / 640 = 0.003125 (NDC width of one pixel)
double yStep = 1d / 240; // 2 / 480 = 0.0041666... (NDC height of one pixel)
double y = 1d;
double x = 0d;

// start half a step in from the top edge so each point sits at a pixel centre
y = y - (yStep / 2d);
for (int row = 0; row < 480; row++) {
	// start half a step in from the left edge
	x = -1d + (xStep / 2d);
	for (int column = 0; column < 640; column++) {
		depthCoordinates[coordinateIndex] = (float)x;
		depthCoordinates[coordinateIndex + 1] = (float)y;
		coordinateIndex = coordinateIndex + 2;

		x = x + xStep;
	}

	y = y - yStep;
}

This loop spits out an array of coordinates that sit at the centre of each pixel.
However, when I use them, I always get a 'clamp to edge' border in quadrants I, III, and IV.
(In quadrant I the clamp border is on the right, in quadrant III on the bottom, and in quadrant IV on both the bottom and the right.)
I know this is an issue with my coordinates, because if I 'oversize' my steps (by dividing by a smaller value for xStep and yStep) the clamp problem goes away.
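
For completeness, the draw pass for option two looks roughly like this (a simplified sketch with placeholder names, not my exact code; written against the android.opengl.GLES20 bindings):

// One GL_POINTS vertex per depth sample, rendered into a 640x480 offscreen framebuffer.
// program, framebuffer, coordinateVbo and depthVbo are placeholders; the vertex shader
// has an "a_position" vec2 attribute, an "a_depth" float attribute and sets
// gl_PointSize = 1.0, and the fragment shader encodes a_depth into the output colour.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, framebuffer);
GLES20.glViewport(0, 0, 640, 480); // viewport matches the depth resolution
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glUseProgram(program);

int positionLocation = GLES20.glGetAttribLocation(program, "a_position");
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, coordinateVbo); // the depthCoordinates array
GLES20.glEnableVertexAttribArray(positionLocation);
GLES20.glVertexAttribPointer(positionLocation, 2, GLES20.GL_FLOAT, false, 0, 0);

int depthLocation = GLES20.glGetAttribLocation(program, "a_depth");
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, depthVbo); // the raw depth floats
GLES20.glEnableVertexAttribArray(depthLocation);
GLES20.glVertexAttribPointer(depthLocation, 1, GLES20.GL_FLOAT, false, 0, 0);

GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 640 * 480); // one point per depth sample

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);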

Does anyone see the error in my code or a flaw in my logic?
Also, feel free to suggest (better) alternatives for getting the depths into the shader.
With my limited knowledge, these are the only two options I could come up with.

Kind regards,
Thomas