Short story: How does one model a camera system for viewing or inspecting a static mesh so that the mesh never clips through the near or far projection plane, while still allowing virtually limitless zooming? It appears to have been done in a third-party package called “Eyeshot” (free demos are downloadable for anyone who’s curious).
Long story:
I am the graphics programmer at an aerospace design software company, and we are gradually replacing a third-party graphics solution, “Eyeshot”, with our own custom graphics in OpenGL (OpenTK, more specifically), now that the company has a dedicated graphics programmer: me.
The graphics engine we need is actually quite rudimentary compared to some game engines I’ve worked on, but Eyeshot has an uncanny ability to allow camera controls that, as far as I can tell, NEVER result in the model being clipped through the near or far planes of the perspective projection.
I am no stranger to creating camera controls; implementing pan/zoom/rotate that behaves with respect to the mouse’s location on the model is no trouble. But how does one prevent near/far plane clipping from ever happening, while allowing seemingly limitless zooming?
The geometry we are rendering is static; it’s not animated or moving around the scene in any way.
Zooming approach 1: Scale model matrix, shift camera position.
My initial approach to zooming in by some scale factor k:
1. deduce the view coordinates, v0, of the world-space point on the model, p0, that lies beneath the cursor,
2. scale the model-view matrix by k, then adjust the camera translation until the same world point, p0, maps back to the same view coordinate, v0.
In other words, the thing beneath the mouse does not appear to move, despite the change in scale.
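The two steps above, in a hypothetical simplified form (uniform model scale and a camera modeled as a pure world-space translation; all names are my own invention, not code from the actual engine), might look like:

```python
def zoom_about_point(scale, cam_pos, p0, k):
    """Scale the model about world point p0 by factor k, adjusting the
    camera translation so p0 keeps the same view coordinate.
    Simplified sketch: uniform scale only; real code would operate on
    the full model-view matrix."""
    # View coordinate of p0 before the zoom: v0 = scale * p0 - cam_pos
    v0 = [scale * p - c for p, c in zip(p0, cam_pos)]
    # After scaling by k, solve  k*scale*p0 - cam_pos' = v0  for cam_pos'
    new_scale = k * scale
    new_cam_pos = [new_scale * p - v for p, v in zip(p0, v0)]
    return new_scale, new_cam_pos
```

The clipping problem falls out of this directly: any other point q on the model maps to new_scale·q − cam_pos′, so its view-space depth grows without bound as the accumulated scale increases, eventually crossing the near or far plane.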
Well, that works great, but after you zoom in far enough, the scale factor k pushes parts of the model through the near or far plane, where they get clipped.
Zooming approach 2: Screen-space scale.
The next idea was to simply scale up the size of the rendered image in screen coordinates.
So I put this at the end of my vertex shader for mesh rendering:
gl_Position.x *= gScreenSpaceScale;
gl_Position.y *= gScreenSpaceScale;
What I found was that this worked reasonably well for zooming in real close to things without pushing them outside the near/far planes, but A) it caused odd visual distortion when zooming out, because you see more of the mesh that would otherwise be off the edge of your screen, and B) zooming in this manner somehow creates the illusion that your view is rotating slightly. Something to do with focal points/vanishing points, I suspect.
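For what it’s worth, scaling gl_Position.x and gl_Position.y by some factor s appears to be mathematically the same as narrowing the field of view without moving the camera (the classic dolly-zoom look), which would explain both the flattened perspective and the shifting vanishing points off-center. A quick check of that equivalence (my own notation; f is the focal term 1/tan(fovy/2) that sits on the diagonal of a standard perspective projection matrix):

```python
import math

def perspective_f(fov_y):
    # Focal term f = 1 / tan(fov/2) from the projection matrix diagonal.
    return 1.0 / math.tan(fov_y / 2.0)

def equivalent_fov(fov_y, s):
    # Scaling clip-space x/y by s is the same as using a projection whose
    # focal term is s * f, i.e. a narrower field of view.
    return 2.0 * math.atan(math.tan(fov_y / 2.0) / s)
```

So the screen-space scale is effectively a telephoto zoom: depth values are untouched (hence no clipping), but perspective foreshortening no longer matches a camera that has physically moved closer.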
I was able to combine both approaches such that screen-space scaling kicks in after you’ve zoomed in beyond a certain threshold. Zooming out using the first approach is a non-issue, since it shrinks the model and keeps it safely within the near/far planes.
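A sketch of how that threshold might be chosen (entirely hypothetical names; assumes the model’s bounding sphere of radius r sits at distance d from the camera, and ignores the camera translation that approach 1 also applies):

```python
def split_zoom(total_zoom, cam_dist, radius, near_min=0.1):
    """Split a requested zoom factor into a model-scale part and a
    screen-space part so the scaled model never crosses the near plane.
    Hypothetical helper, not the actual engine code."""
    # Largest model scale that keeps the sphere's front face at least
    # near_min in front of the camera:
    max_scale = max(1.0, (cam_dist - near_min) / radius)
    model_scale = min(total_zoom, max_scale)
    screen_scale = total_zoom / model_scale  # remainder, applied in clip space
    return model_scale, screen_scale
```

The product of the two parts always equals the requested zoom, so the transition between the two regimes is seamless from the user’s point of view.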
It occurs to me that maybe I can mathematically determine the vanishing points/focal points that create the illusion of rotation under the screen-space scale, and adjust the camera angle to counteract that illusion, but that’s as far as I’ve gotten.
Has anyone found an approach for this, or found any good articles on the matter?