I am implementing mouse picking in my 3D chess app, currently using Vincent and UG. I load my models into the world and set up the camera with UG's LookAt function.
I want to be able to translate screen coordinates into 3D world coordinates. The way I see it, if I know where my camera eye is and where I tapped on the screen (in approximate x, y, z coordinates), I can cast a ray using a line equation and test that ray for intersection against the bounding boxes of my models.
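For the ray-vs-bounding-box part, this is roughly what I have in mind: the standard slab method against an axis-aligned box. A minimal sketch (the `Vec3` type and function names are my own placeholders, not anything from Vincent or UG):

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

/* Slab-method ray/AABB intersection.
   origin: camera eye; dir: ray direction (normalized);
   bmin/bmax: min and max corners of the model's bounding box. */
bool ray_hits_aabb(Vec3 origin, Vec3 dir, Vec3 bmin, Vec3 bmax)
{
    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { dir.x,    dir.y,    dir.z    };
    const float lo[3] = { bmin.x,   bmin.y,   bmin.z   };
    const float hi[3] = { bmax.x,   bmax.y,   bmax.z   };
    float tmin = -INFINITY, tmax = INFINITY;

    for (int i = 0; i < 3; ++i) {
        if (fabsf(d[i]) < 1e-8f) {
            /* Ray is parallel to this slab: origin must lie inside it. */
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
        } else {
            float t1 = (lo[i] - o[i]) / d[i];
            float t2 = (hi[i] - o[i]) / d[i];
            if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
            if (t1 > tmin) tmin = t1;
            if (t2 < tmax) tmax = t2;
            if (tmin > tmax) return false;  /* slabs don't overlap */
        }
    }
    return tmax >= 0.0f;  /* reject boxes entirely behind the ray */
}
```

I'd run this once per piece and pick the closest hit.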
Will this work? If so, how do I get the homogeneous world coordinates from the x, y coordinates of the screen?