Usually, when the window size changes, you do two things to adjust the OpenGL rendering:
- call glViewport() with the new client-area size, and
- set up a new projection matrix, e.g. using gluPerspective().
With this standard approach, the rendering behaves differently depending on whether you change the window's width or its height. Download e.g.
and start it in windowed (non-fullscreen) mode, then shrink the window vertically: the rendered cube is displayed smaller so that it still fits into the viewport. On the other hand, if you shrink the width, the cube keeps its screen-space size, and once the width is small enough, the cube gets cropped.
My question is: how do I need to change the matrix produced by gluPerspective() so that shrinking the window width behaves the same way as shrinking the window height?
By accident, I found out that changing the -1 in the 3rd column, 4th row of the matrix as shown here to S, where

```c
if (A < 1.0) S = 1.0 / A;
```

(A being the aspect ratio passed to gluPerspective()) seems to do the trick, but I have no explanation for why this works. So my second question is: if the above is the right thing to do, why does it work?