I am trying to write a simple application which overlays ‘markers’ on a photograph to highlight certain features. For example, say you were standing on a hill and took a photograph of the skyline - I would like to “mark” points of interest such as buildings etc.

Therefore, I am not writing a 3D graphics application - I’m just trying to get a 3D projection of the landscape in the photograph so that I can accurately annotate it.

To achieve this I know I’ll need the following information (which I have access to):

Camera: GPS Coordinates, Elevation, Compass Bearing and Tilt (up and down)
Points of interest: GPS Coordinates, Elevation

I am able to work out elements such as the distance (in metres) and the bearing from the coordinates where the photograph was taken to the coordinates of the point of interest I wish to “mark”, so in theory I have all the information I need to accurately annotate the photograph.

However…

I have no idea how to go about using 3D projection - creating a model of the world and converting it to screen pixels using OpenGL. I posted a similar message on another forum, whose members suggested using OpenGL to “plug in” the real-world x,y,z coordinates and have OpenGL return the x,y screen pixels.

So before I spend another day hunting around for a tutorial on how to do this could someone kindly point me in the right direction? Any help and advice which will help me understand this would be greatly appreciated.

Oh and I will need to use the OpenGL ES 1.0 specification to create this if that makes any difference.

If I understand you correctly, then all you have to do is match the projection matrix to the photograph and then feed the actual relative coordinates of the POI (point of interest) through the matrix.
Technically you won’t need OpenGL for that.

Okay, I didn’t really understand much of that; I really am a beginner when it comes to anything 3D. For example, what is the projection matrix, and what is a POI?

Will I have to calculate the distance to the horizon in the photograph to be able to plot accurately the ‘relative coordinates’ in the foreground?

If anyone has a good beginner tutorial to explain what I need to understand, or would be willing to explain it, I’d be very grateful!

Thanks for those links, I’ll try to work out what I need to learn from them - it’s all proving a little confusing at present, but I shall persevere and try to understand it.

Ilian, the trouble is that I do not know the X:Y coordinates of the dot on the screen where a point of interest (e.g. a building) is displayed - this is what I am trying to calculate. All I have are the x,y,z coordinates of the points of interest and the x,y,z coordinates, compass bearing and tilt of the camera. Note: the coordinates exist in the form of GPS coordinates (X:Y) and an elevation above sea level (Z); I need to convert these into coordinates for a model…

What I need to learn is how to create a 3D model with this data and then calculate, from the 3D model, the X:Y coordinates on the screen where each point of interest is displayed - e.g. the pixel at X=200, Y=124 is the location of the building.

Will I need OpenGL to be able to do this, or is it a more straightforward problem?

Mathematically speaking, it is just:
screenposXY = ProjectionMatrix * ModelViewMatrix * posXYZ
(followed by the perspective divide and the viewport mapping to turn the result into pixels).

You know posXYZ, and want screenposXY.
You have to fill the ModelViewMatrix with the correct values, based on the x,y,z position, bearing and tilt of the camera.
You have to fill the ProjectionMatrix with values depending on the camera projection, such as the horizontal and vertical field of view, and the scale you want to work with (the max x,y values at the edge of the photograph, whether the centre is at 0,0 or not, etc.).

It’s getting clearer what I need to do now. I’ll have to read up on the ModelViewMatrix and ProjectionMatrix; then hopefully it’ll be easier to understand what needs to be done.

Does anyone know of any good books/tutorials/sample code which may help? Currently I have no idea how to set the scale (i.e. metres between the camera and the points) or how to set the angle/tilt of the camera.

If you feel really lost, you may find it easier to play with http://processing.org, drawing coloured triangles and doing transformations (there is a simple bridge to OpenGL), so that you can easily see the results.
Try doing what you want with the basic rotate/translate calls plus gluPerspective.

Scale: it is your choice.
Angle/tilt of camera: use glRotate, one call for each rotation axis:
glRotatef(aroundz, 0, 0, 1);
glRotatef(aroundx, 1, 0, 0);
glRotatef(aroundy, 0, 1, 0);

Okay, I have struggled with this and haven’t got very far. I have realised that OpenGL is intended for rendering graphical models in perspective rather than for calculating the figures I’m after. I’ll have to try to find another solution to this problem.

It’s a problem that I can’t find any good examples/tutorials or anything that describes what I’m trying to implement, though.
