Why does distance from camera only work with a square window?

I thought it would be best to start a new posting (see the previous posting for background).
As I explained in the previous posting, I have been trying to generate an image (in reality a 2D array) containing the distance from the camera location (the origin of view space) to each visible part of my scene.
I had done this before and was just trying to reproduce that result. I was going a bit crazy trying to figure out why I was not getting the same output. I now know what is causing the difference, but I am unable to explain why, so I am hoping someone can shed some light.

Here are the main points.
I was using the following vertex and fragment shaders to render the distance from the camera location:

#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

out vec4 pos;
void main()
{
    pos = view * model * vec4(aPos, 1.0);
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
#version 330 core

//layout(location = 0) out float depth; // first entry in glDrawBuffers

in vec4 pos;
out float depth;

void main()
{
    depth = length(pos.xyz);
}
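For reference, the quantity the fragment shader writes (the Euclidean distance from the camera to the view-space position) can be reproduced on the CPU. Here is a minimal numpy sketch; the matrices below are illustrative placeholders, not taken from the actual program:

```python
import numpy as np

# Hypothetical view matrix: camera 5 units back along +Z, so the
# camera sits at the view-space origin (as in lookAt-style setups).
view = np.eye(4)
view[2, 3] = -5.0

model = np.eye(4)  # identity model matrix for this sketch

vertex = np.array([1.0, 2.0, 0.0, 1.0])  # object-space position, w = 1

# Same computation as the vertex shader: pos = view * model * vec4(aPos, 1.0)
pos = view @ model @ vertex

# The fragment shader writes length(pos.xyz): distance to the camera
distance = np.linalg.norm(pos[:3])
```

With these placeholder values, `pos.xyz` is `(1, 2, -5)`, so the written depth is `sqrt(30)`.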

After rendering to a color attachment in a framebuffer, I got the following,

My expectation was to get the same output as when rendering these cubes normally, except that each pixel would contain the distance from the camera. Instead I got the above. FYI, zoom is set to 45. The image is 800 wide and 600 high (that is another thing I have yet to figure out: why, when using glReadPixels(0,0,800,600,...), I get something with the dimensions swapped).
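The dimension swap is consistent with the flat pixel buffer being reshaped with width and height transposed. A small numpy sketch (no GL context needed; the 800x600 size is taken from the post) shows why a swapped reshape scrambles a non-square image but coincides with the correct one when the window is square:

```python
import numpy as np

WIDTH, HEIGHT = 800, 600

# Simulate the flat, row-major buffer that a single-channel
# WIDTH x HEIGHT glReadPixels readback fills.
flat = np.arange(WIDTH * HEIGHT, dtype=np.float32)

# Correct interpretation: HEIGHT rows of WIDTH pixels each.
correct = flat.reshape(HEIGHT, WIDTH)

# Swapped interpretation: rows break at the wrong stride, so each
# "row" starts WIDTH - HEIGHT pixels early, wrapping the image
# content diagonally instead of lining it up.
swapped = flat.reshape(WIDTH, HEIGHT)

# Row 1 of the swapped view starts at flat index HEIGHT, which is
# still inside row 0 of the correct image.
print(swapped[1, 0] == correct[0, HEIGHT])  # True: no longer row-aligned

# When WIDTH == HEIGHT the two strides are equal, so both
# interpretations read out the exact same array: a square window
# hides the bug entirely.
```

This is only a sketch of the stride mismatch, not a diagnosis of the whole rendering problem.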

If I keep the same window dimensions and increase the zoom of the camera to 90, I get the following result,

It is only when I make the window square that I get the result I was expecting from the start,

Here is the same with a zoom = 45,


The question is why? Why does it give me what I was expecting only when the window is square?

I answered here: Repetitions when rendering distance to eyepoint - #4 by nimelord

Hi @nimelord Unless I am missing something, I do not see how the answer you provided explains why I am getting the expected results only with a square window? :worried:

You use the height as the width and the width as the height. Only when the two are equal does this not produce a problem. Just swap their positions and you can produce correct images at any resolution.
Read the specification of the glReadPixels() function carefully, especially the part describing the order of its parameters.

Hi @nimelord I understand that I might have to swap width with height when using glReadPixels, due to the fact that I am using PyOpenGL (see GClements' response here). But regardless of whether I swap them, the result I get is similar to the one shown in the first two screens, and not the expected one, unless I use a square window (i.e. width = height).
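Separately from the width/height order, note that glReadPixels fills the buffer starting at the bottom row, because OpenGL's window origin is the lower-left corner, so a readback usually also needs a vertical flip before being displayed as a conventional image. A hedged numpy sketch (the flat buffer here stands in for the PyOpenGL readback):

```python
import numpy as np

width, height = 800, 600

# Stand-in for the flat buffer a PyOpenGL glReadPixels call returns
# for a single-channel float readback of a width x height framebuffer.
flat = np.arange(width * height, dtype=np.float32)

# Rows first, then columns: the image shape is (height, width).
img = flat.reshape(height, width)

# OpenGL's first row is the bottom of the framebuffer, so flip
# vertically to get a top-down image for display or saving.
img = np.flipud(img)
```

After the flip, the framebuffer's bottom row (the start of the flat buffer) sits at the last row of `img`.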

I tried to explain that from the beginning.
Unfortunately, my attempts to help have been of no use to you, I see.

AFAICT, this question is completely academic until you fix the bug(s) in your rendering:

Only then does it make sense to consider reading back the results and trying to make some sense of them.