How can I account for pincushion distortion when drawing a pan/tilt (or azimuth/elevation) point on my screen?
Let me explain the purpose first. I am trying to predict the location of the Sun in my camera view over the next few minutes/hours; basically, I want to show the path the Sun will trace across my camera view. Right now I am testing whether I can simply draw a point/circle/etc. at the Sun's current location. If I can do that, then I can add points for all the subsequent predicted locations too.
I am NOT using shaders. I am still developing on the old fixed-function pipeline, so I can't take advantage of them. I use gluPerspective() to set up the camera view from the desired vertical/horizontal fields of view, and I want to overlay a point on the surface of the view (using 2D positioning) that represents where the Sun is calculated to be in that view.
I know my pan/tilt (azimuth/elevation) calculations are correct, because if I literally pan/tilt the camera to the desired values, the Sun ends up directly in the middle of the view. But if I leave pan/tilt at 0/0 and instead try to draw a dot/point at the computed pan/tilt location, the dot is increasingly off the further it is from the center, and most off at the corners. Initially I simply divided the pan value by the total horizontal field of view and positioned the point at that fraction of the screen width, and did the same for tilt. Unfortunately, that is not correct, since pan and tilt are really coupled, so I thought there might be some mapping I could use to get the correct x,y placement on the screen, but I am having no luck finding it. Do you have any suggestions?
Here is what my camera sees, and where my simple calculation was placing the pan/tilt location (the yellow dot near the Sun).
Here is an example picture of what pincushion distortion looks like; I believe this is what I am experiencing.
I wish I could find a mapping for this distortion, but it seems that mapping requires knowing something about the lens, and I know nothing about the OpenGL/gluPerspective lens that is being modeled. (See the link for a short description of the mapping.)
I thought maybe there was a way to extract a view matrix from OpenGL that I could use to transform my pan/tilt values into the camera view, but I'm not experienced enough to know whether that even works.
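From what I can tell, this is exactly what gluProject() in GLU does. Here is a pure-math sketch of the same transform; in a real program the matrices would come from glGetDoublev(GL_MODELVIEW_MATRIX, ...), glGetDoublev(GL_PROJECTION_MATRIX, ...), and glGetIntegerv(GL_VIEWPORT, ...). OpenGL stores matrices column-major, but I use row-major here for readability, and the function name is my own:

```c
#include <math.h>

/* What gluProject() does, spelled out: world point -> eye space via the
 * modelview matrix, -> clip space via the projection matrix, perspective
 * divide, then the viewport transform to window coordinates (origin at
 * the bottom-left, as in OpenGL). Matrices are row-major 4x4.
 * Returns 0 if the point is at or behind the camera plane (w <= 0). */
static int project_point(double mv[4][4], double proj[4][4],
                         const int viewport[4], const double world[3],
                         double *winx, double *winy)
{
    double eye[4], clip[4];
    /* eye = modelview * (world, 1) */
    for (int i = 0; i < 4; i++)
        eye[i] = mv[i][0] * world[0] + mv[i][1] * world[1]
               + mv[i][2] * world[2] + mv[i][3];
    /* clip = projection * eye */
    for (int i = 0; i < 4; i++)
        clip[i] = proj[i][0] * eye[0] + proj[i][1] * eye[1]
                + proj[i][2] * eye[2] + proj[i][3] * eye[3];
    if (clip[3] <= 0.0)
        return 0;
    double nx = clip[0] / clip[3];   /* NDC in [-1, 1] */
    double ny = clip[1] / clip[3];
    *winx = viewport[0] + (nx + 1.0) * 0.5 * viewport[2];
    *winy = viewport[1] + (ny + 1.0) * 0.5 * viewport[3];
    return 1;
}
```

So instead of mapping pan/tilt directly, I could convert the Sun's pan/tilt to a direction vector in world space and feed it through this (or through gluProject itself) to get the 2D screen position.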
My final alternative is to simply draw the predicted Sun positions in the 3D view (i.e., draw more Suns at each future predicted point) and not worry about overlaying the positions on the 2D screen. But that doesn't seem as elegant, and since I can already calculate the current and predicted pan/tilt points, why not use the 2D view to draw them? This pincushion distortion has me scratching my head, so any help would be appreciated.