Normalized device coordinates from world coordinates using orthographic projection

Hi there,

Let’s say I have a world-space position stored in a variable Position, and I want to calculate
the Normalized Device Coordinates (NDC) for this Position on the CPU in C++ code.

Using a ModelViewProjectionMatrix that contains a perspective projection matrix, I would do it like this:

// Calculate the clip-space position
s_Position = ModelViewProjectionMatrix * glm::vec4(Position[0], Position[1], Position[2], 1.0f);

// Transform from clip space to normalized device coordinates [-1,1] by doing the "perspective divide"
// See here: http://www.songho.ca/opengl/gl_transform.html
s_Position.x = s_Position.x / s_Position.w;
s_Position.y = s_Position.y / s_Position.w;
s_Position.z = s_Position.z / s_Position.w;

// Calculate the window depth coordinate, i.e. the value that would be written to the depth buffer [0,1]
s_Position.z = s_Position.z * 0.5f + 0.5f;

This should give me s_Position’s x and y in the range [-1,1] and z in the range [0,1].
I hope that is correct so far.
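
For reference, here is the above as a minimal self-contained program (a sketch assuming GLM; the matrices, field of view, and test point are arbitrary example values, not from my actual code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    // Arbitrary example matrices, only for illustration
    glm::mat4 Projection = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    glm::mat4 ModelView  = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));
    glm::mat4 ModelViewProjectionMatrix = Projection * ModelView;

    glm::vec3 Position(0.2f, 0.3f, 3.0f); // arbitrary world-space point inside the frustum

    // Clip-space position
    glm::vec4 s_Position = ModelViewProjectionMatrix * glm::vec4(Position, 1.0f);

    // Perspective divide -> NDC in [-1,1]
    s_Position.x /= s_Position.w;
    s_Position.y /= s_Position.w;
    s_Position.z /= s_Position.w;

    // Window depth in [0,1] (assumes the default glDepthRange)
    float depth = s_Position.z * 0.5f + 0.5f;

    printf("NDC: %f %f %f  depth: %f\n", s_Position.x, s_Position.y, s_Position.z, depth);
    return 0;
}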

Now I want to achieve the same thing, but with a ModelViewProjectionMatrix that contains
an orthographic projection matrix.

But in this case w is always 1, so there is no “perspective divide”.

So which step is performed between the vertex and fragment shader to
get the NDC, and how can I achieve the same result on the CPU side?

Help is really appreciated!

It is the exact same math. Dividing by 1 is legal math; it simply does nothing.
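
To make that concrete, here is a small sketch (assuming GLM; the ortho bounds and test point are arbitrary example values) showing that an orthographic projection leaves w at 1, so the divide is a no-op:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    // Arbitrary example bounds: left, right, bottom, top, near, far
    glm::mat4 Ortho = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.1f, 100.0f);

    // Arbitrary test point
    glm::vec4 s_Position = Ortho * glm::vec4(2.0f, 3.0f, -5.0f, 1.0f);

    // The bottom row of an orthographic projection matrix is (0, 0, 0, 1),
    // so w comes out unchanged and the divide does nothing
    printf("w = %f\n", s_Position.w); // prints 1.000000

    s_Position.x /= s_Position.w;
    s_Position.y /= s_Position.w;
    s_Position.z /= s_Position.w;

    printf("NDC: %f %f %f\n", s_Position.x, s_Position.y, s_Position.z);
    return 0;
}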

I just noticed that I made a mistake in my high-level code when switching to the orthographic projection.
As a result, my debug output showed non-normalized values in s_Position, and I started
to debug this “perspective divide” stuff with w = 1 and so on. Sorry, as you said, it works fine.