[QUOTE=debonair;1256319]The Projection Matrix transforms the vertices in view coordinates into the canonical view volume (a cube of sides 2×2×2, centered at the origin, and aligned with the 3 coordinate axes).[/QUOTE]

No. I don’t know where you got this, but this statement is wrong in several ways.

First, the coordinate space you feed into the projection transform is “EYE-SPACE” (sometimes confusingly called view space).

The projection transform transforms positions/vectors from that space into CLIP-SPACE. This is NOT a 3D space where -1 <= X,Y,Z <= 1. It is a 4D space where -W <= X,Y,Z <= W. Clipping is performed in this space.

After clipping is done, the perspective divide is applied (X, Y, and Z are each divided by W), which gives you NDC-SPACE (NDC = Normalized Device Coordinates). This is the -1 <= X,Y,Z <= 1 “cube” you’re referring to.
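To make that concrete, here is a minimal sketch in Python (the standard OpenGL-style perspective matrix; the fov/aspect/near/far values and the sample point are arbitrary) showing an eye-space point pushed through the projection transform into clip-space, and then the perspective divide into NDC:

```python
import math

def perspective(fovy_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix,
    # written here row-major for readability.
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def mat_vec(m, v):
    # 4x4 matrix times 4-component column vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

P = perspective(90.0, 1.0, 0.1, 100.0)

eye = [0.3, -0.2, -5.0, 1.0]        # eye-space position (camera looks down -Z)
clip = mat_vec(P, eye)              # clip-space: 4D homogeneous coordinates
w = clip[3]
# A visible point satisfies -W <= X,Y,Z <= W in clip-space.
assert all(-w <= c <= w for c in clip[:3])

ndc = [c / w for c in clip[:3]]     # perspective divide -> NDC
# Only after the divide do we land in the [-1, 1]^3 "cube".
assert all(-1.0 <= c <= 1.0 for c in ndc)
```

Note that the bounds check is against W in clip-space and against 1 only after the divide, which is exactly the distinction being made above.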

In summary:

EYE-SPACE -> (PROJECTION TRANSFORM) -> CLIP-SPACE -> (CLIPPING) -> (PERSPECTIVE DIVIDE) -> NDC-SPACE

The vertex shader takes care of applying the projection transform, but the GPU takes care of the clipping and perspective-divide pieces behind the scenes, after the vertex shader runs.

[QUOTE=debonair]Now, if this is done in the vertex shader only, why does it come after the vertex-shader stage in the pipeline? Shouldn't it just be a part of the vertex shader?[/QUOTE]

No. Think about triangle clipping. Consider a triangle that is partially inside and partially outside the view frustum. Suppose you did all of the above in the vertex shader, for each vertex independently, so you know that 1 vertex of your triangle is in but 2 are out. What does that get you? Not much.

Clipping needs to be applied to the whole triangle, not just to a single vertex, which is part of why it happens after the vertex shader (which only operates on a single vertex). The GPU, operating on the whole triangle, can then rasterize all of the fragments (think pixels) that lie within the part of your triangle that is inside the view frustum.
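To see why clipping is inherently a per-triangle operation, here is a minimal sketch (Python; the Sutherland–Hodgman algorithm is a standard textbook way to clip a polygon against a plane, not necessarily what any particular GPU does in hardware) clipping a triangle against just one plane of the clip-space volume, X <= W. The intersection points it emits cannot be computed from any single vertex alone — you need both endpoints of each edge:

```python
def clip_against_plane(verts, inside, intersect):
    # Sutherland-Hodgman step for one plane: walk the polygon's edges,
    # keep inside vertices, and emit an intersection point wherever an
    # edge crosses the plane.
    out = []
    for i, cur in enumerate(verts):
        prev = verts[i - 1]           # previous vertex (wraps around)
        cur_in, prev_in = inside(cur), inside(prev)
        if cur_in != prev_in:         # edge crosses the plane
            out.append(intersect(prev, cur))
        if cur_in:
            out.append(cur)
    return out

def inside_x(v):
    # The X <= W half-space of the clip-space volume.
    x, y, z, w = v
    return x <= w

def intersect_x(a, b):
    # Solve for t along the segment a->b where x(t) == w(t).
    t = (a[0] - a[3]) / ((a[0] - a[3]) - (b[0] - b[3]))
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(4))

# A triangle (homogeneous coords, w = 1) with one vertex outside X <= W.
tri = [(0.0, 0.0, 0.0, 1.0), (2.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0)]
poly = clip_against_plane(tri, inside_x, intersect_x)
# The outside vertex is dropped and two intersection points are added,
# so the clipped result is a quad with 4 vertices.
```

Notice that the clipped shape isn't even a triangle anymore; a per-vertex in/out flag could never produce those two new vertices, which is exactly why the vertex shader can't do this job.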