Mapping to a 3D space

I’m using the Microsoft Kinect to try and display a “3D” view.

From the Kinect, I have an RGB stream and a depth stream, where each value in the depth stream corresponds to how close or far the corresponding pixel is.

Given these two sets of data, I want to map the RGB pixels into a 3D space where each pixel is positioned according to its depth value.
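To be concrete, here is a sketch of what I think the mapping should look like: back-projecting each depth pixel through a pinhole camera model and pairing it with its RGB color. The intrinsics `FX`, `FY`, `CX`, `CY` below are placeholder values I made up; I understand the real numbers would have to come from calibrating the Kinect.

```python
import numpy as np

# Placeholder Kinect-style intrinsics (focal lengths and principal
# point, in pixels). The real values come from camera calibration.
FX, FY = 525.0, 525.0
CX, CY = 319.5, 239.5

def depth_to_points(depth_m, rgb):
    """Back-project a depth image (in meters) into camera-space 3D
    points, pairing each point with its RGB color."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX   # pinhole model: X = (u - cx) * Z / fx
    y = (v - CY) * z / FY   # pinhole model: Y = (v - cy) * Z / fy
    points = np.dstack((x, y, z)).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0  # a depth of 0 usually means "no reading"
    return points[valid], colors[valid]
```

The resulting colored point cloud is what I would then want to render from an arbitrary viewpoint.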

This has been done by Oliver Kreylos, as seen in:

How would I go about doing this?

I haven’t used OpenGL before, so I was also wondering: is OpenGL the right tool for this?