# Doubt in Graphics pipeline

In the graphics pipeline, after the vertex shader comes primitive assembly -> clipping to the view frustum -> normalized device coordinates -> viewport transformation.

Now in the vertex shader we multiply the object coordinates by the modelview and projection matrices. "The Projection Matrix transforms the vertices in view coordinates into the canonical view volume (a cube of sides 2 2 2, centered at the origin, and aligned with the 3 coordinate axes). Typically, this will be either by an orthographic projection or a perspective projection. This transform includes multiplication by the projection transformation matrix followed by a normalization of each vertex, calculated by dividing each vertex by its own w coordinate."

Now, if this is done in the vertex shader, why does it come after the vertex shader in the pipeline? Shouldn't it just be part of the vertex shader? If not, what is the output of the projection matrix multiplied by the vertex coordinates?

The so-called "clip coordinates" are the result of multiplying the projection matrix with the "eye coordinates" vector. Clip coordinates are in the range [-w, w]. Dividing by w yields NDC (normalized device coordinates).

Instead of using italics, it would be more useful to cite the source of the statement. The statement is not quite correct.

Are you saying that the canonical volume [-1, 1] is not the result of multiplication by the projection matrix?

[QUOTE=debonair;1256319]" The Projection Matrix transforms the vertices in view coordinates into the
canonical view volume (a cube of sides 2 2 2, centered at the origin, and aligned with the 3 coordinate axes).
[/QUOTE]
No. I don’t know where you got this, but this statement is wrong in several ways.

First, the coordinate space you feed into the projection transform is “EYE-SPACE” (sometimes confusingly called view space).

The projection transform transforms positions/vectors from that space into CLIP-SPACE. This is NOT a 3D space where -1 <= X,Y,Z <= 1. It is a 4D space where -W <= X,Y,Z <= W. Clipping is performed in this space.

After clipping is done, then the perspective divide is applied (divide by W), which gives you NDC-SPACE (NDC = Normalized Device Coordinates). This is the -1 <= X,Y,Z <= 1 “cube” you’re referring to.

In summary:

EYE-SPACE -> (PROJECTION TRANSFORM) -> CLIP-SPACE -> (CLIPPING) -> (PERSPECTIVE DIVIDE) -> NDC-SPACE

The vertex shader takes care of applying the projection transform, but the GPU takes care of the clipping and perspective-divide pieces behind the scenes after the vertex shader runs.

Now, if this is done in the vertex shader, why does it come after the vertex shader in the pipeline? Shouldn't it just be part of the vertex shader?

No. Think about triangle clipping. Consider a triangle that is partially in and partially out of the view frustum. Suppose you did all of the above in the vertex shader for each vertex, so you know that one vertex of your triangle is in but two are out. What does that get you? Not much.

Clipping needs to be applied on the whole triangle, not just on a single vertex, which is part of why it happens after the vertex shader (which only operates on a single vertex). The GPU, operating on the whole triangle, can then rasterize all of the fragments (think pixels) that lie within your triangle which are inside the view frustum.