Graphics pipeline OpenGL

#1

I am looking for recent documentation on the modern OpenGL graphics pipeline and workflow, but I can't find anything more recent than 2015. Can someone point me to an up-to-date (2018 or later) online resource, please?

#2

PS: I replaced “graphics” with “rendering” in my search and located https://www.khronos.org/opengl/wiki/Rendering_Pipeline_Overview
Is this the recommended go-to place for my query?

#3

That’s a decent pipeline overview (it’s both high-level and vendor-agnostic).

If you’re looking for more details in some specific area, just follow up with what information you’re looking for.

#4

Thanks. The first thing I want to understand more clearly is where the perspective and viewing transformations occur. The pipeline overview just says “perspective divide”, but referring back to an old textbook (Foley and van Dam, sections 9.10 and 9.11, with references to chapter 8), in that example vertices were processed through a series of viewing transformations, as follows:

  1. “World coordinates” are normalised into the range -1 to 1 for clipping.
  2. Clipping against the view frustum (canonical view volume)
  3. Projection onto the view plane and transformation into the viewport in NDC
  4. 3D image transformation in NDC
  5. Orthographic projection from 3D NDC to 2D NDC
  6. Transformation into physical device coordinates

The pipeline assumes that modelling transformations, for placing and instancing objects, have already been done.
As I understand it, the projection in step 3 above is not just a perspective projection: it also aligns the view position with the origin, aligns the view axes with “screen space” x, y, z, and applies scaling. So my specific question is: does the “perspective divide” refer to the process of doing all of these operations?

#5

First, you got the order of operations wrong: clipping happens before normalization; “world coordinates” are never normalized (unless you have a very odd definition of “world”); the transform into NDC is the “normalization” (NDC stands for normalized device coordinates); and NDC space is always three-dimensional.

Second, OpenGL nowadays only handles clipping and the transforms that happen after it. You are responsible for everything before that, in your Vertex Processing shader stages.

Also:

“aligns the view position with origin and aligns the view axes with ‘screen space’ x, y, z, and applies scaling as well.”

That happens before clipping. Indeed, even in OpenGL 1.1, that happened before clipping. It’s been a while since I’ve looked at Foley, but that’s never been how OpenGL worked.

The “perspective divide” is exactly that: division of the clip-space XYZ by the clip-space W. That’s what transforms clip space into NDC space.
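To make that concrete, here is a minimal sketch of the divide in Python (the function name and the sample values are mine, purely for illustration):

```python
def perspective_divide(clip_pos):
    """Perspective divide: clip-space XYZ divided by clip-space W.
    This is the fixed-function step OpenGL performs after clipping,
    and it is what produces normalized device coordinates (NDC)."""
    x, y, z, w = clip_pos
    return (x / w, y / w, z / w)

# An illustrative clip-space vertex, as a vertex shader might output it:
clip_pos = (2.0, -1.0, 3.0, 4.0)
print(perspective_divide(clip_pos))  # (0.5, -0.25, 0.75)
```

Note that nothing else happens in this step: no translation, no axis alignment, no viewport mapping, just the division by W.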

#6

I have found an article that partially answers my question here:
https://learnopengl.com/Getting-started/Coordinate-Systems
In a section on View space it says:
“The view space is the result of transforming your world-space coordinates to coordinates that are in front of the user’s view. … This is usually accomplished with a combination of translations and rotations to translate/rotate the scene so that certain items are transformed to the front of the camera.”
Then in the next section on clip space:
“To transform vertex coordinates from view to clip-space we define a so called projection matrix … The projection matrix then transforms coordinates within this specified range to normalized device coordinates (-1.0, 1.0).”
I just need an idea of where these map to the stages in https://www.khronos.org/opengl/wiki/Rendering_Pipeline_Overview.
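For orientation, here is a rough sketch (mine, not from the tutorial) of what such a projection matrix does, using a glm::perspective-style matrix in plain Python; the field-of-view, aspect and near/far values are made up for illustration:

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """Right-handed perspective matrix in the style of glm::perspective:
    after the perspective divide, it maps the view frustum into
    OpenGL's canonical [-1, 1] volume (NDC)."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0,                         0.0],
        [0.0,        f,   0.0,                         0.0],
        [0.0,        0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                        0.0],
    ]

def mat_vec(m, v):
    """Multiply a row-major 4x4 matrix by a 4-component vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

proj = perspective(90.0, 1.0, 0.1, 100.0)   # illustrative parameters
view_pos = (0.3, -0.2, -5.0, 1.0)           # a view-space point in front of the camera
clip = mat_vec(proj, view_pos)              # what the vertex shader would output
ndc = tuple(c / clip[3] for c in clip[:3])  # the perspective divide
assert all(-1.0 <= c <= 1.0 for c in ndc)   # inside the canonical view volume
```

The matrix multiply happens in your shader code; only the final divide (and what follows) is done by OpenGL itself.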

#7

All of those stages are up to you. You can perform those transforms, or do something else entirely. The tutorial you link to explains this (badly). Unfortunately, the first paragraph is very, very wrong about “OpenGL expects all the vertices, that we want to become visible, to be in normalized device coordinates after each vertex shader run.” They will be in clip-space; NDC space is clip-space divided by the clip-space W.

But after making that substitution, the tutorial is correct: the vertex shader (or other vertex processing stages) produces coordinates in clip-space. How you accomplish this is entirely up to you. You may, or may not, use model-space, view-space, world-space or whatever else. How you get from one space to another is again up to you.
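As a concrete example of “something else entirely” (my illustration, not from the post): the well-known full-screen-triangle trick computes clip-space positions directly from the vertex index, with no matrices or intermediate spaces at all. Sketched here in Python; in GLSL the same arithmetic would run on gl_VertexID:

```python
def fullscreen_triangle_vertex(vertex_id):
    """Clip-space position for one vertex of a triangle covering the
    whole screen, derived from the vertex index alone (no matrices).
    With w = 1.0, clip space and NDC coincide."""
    x = -1.0 + float((vertex_id & 1) << 2)  # -1, 3, -1
    y = -1.0 + float((vertex_id & 2) << 1)  # -1, -1, 3
    return (x, y, 0.0, 1.0)

print([fullscreen_triangle_vertex(i) for i in range(3)])
# [(-1.0, -1.0, 0.0, 1.0), (3.0, -1.0, 0.0, 1.0), (-1.0, 3.0, 0.0, 1.0)]
```

OpenGL neither knows nor cares how these clip-space values were produced; it just clips, divides, and maps them to the viewport.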

#8

OK, well, I suppose my expectations were a little more high-level as far as support for viewing transformations is concerned.