Question on multipass rendering: Recalc or store?

Just curious for some input before I take my first stab at this:

I have to use a multi-pass rendering technique. I can obtain all the data I need to light the visible screen-coordinate pixels after the first render pass. However, to do this I need the X,Y,Z coord of each pixel that makes it to the final rendered screen.

I can fathom two appropriate approaches for this:
1: Make an extra buffer that stores the X,Y,Z coords of the fragments that make it past the depth test.
2: Save the vertex-stage output in a Transform Feedback buffer and just re-render from it as needed for fragment rendering.

Here's what I'd like help with: advantages/disadvantages of both, plus a few questions I have.
EXTRA BUFFER:
Pros:
-The extra storage is modest: a 1600x900 buffer with a float for each coord is 12 bytes per fragment, about 17.3 MB total (1,440,000 fragments x 12 bytes). It's made/accessed in parallel on the GPU so it shouldn't be a big issue, right? What are the read/write downfalls of this approach?
-Only deal with fragments that make it to the screen. Fragment calculations and index lookups can be done by interpolating vertex coords across a simple screen-width-by-screen-height quad; no dealing with frags that won't make it into the end product.
Cons:
-Can't deal with fragments that didn't make it past the initial rendering stage.
-Needs the extra storage buffer (sketched just below).
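Something like this is what I have in mind for option #1's first pass; just a sketch, the names are my own and it's untested:

#version 330
// first-pass fragment shader: color to attachment 0, eye-space XYZ to attachment 1
in vec3 eyePos;        // position after the ModelView transform, from the vertex shader
in vec4 surfaceColor;
layout(location = 0) out vec4 outColor;  // bound to GL_COLOR_ATTACHMENT0
layout(location = 1) out vec3 outXYZ;    // bound to GL_COLOR_ATTACHMENT1 (a GL_RGB32F texture)
void main()
{
    outColor = surfaceColor;
    outXYZ   = eyePos;   // only fragments that pass the depth test stay stored
}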

TRANSFORM FEEDBACK BUFFER:
Pros:
-Can deal with a broader range of data for more sampling, since the fragments are reprocessed for each pass instead of just those making it to the screen.
-No extra XYZ storage buffer.
Cons:
-Extra processing time from rasterizing the fragments again for each pass.

Would love some feedback & opinions.

I can fathom two appropriate approaches for this:

3: Render the objects again. This is how multipass is traditionally done.

4: Use deferred rendering.

  • No extra XYZ storage buffer

Where is your transform feedback buffer then? You can’t overwrite the source vertex data with the post-vertex-shader data. You’re going to have to allocate some buffer object memory to store this data in.

Plus, you can’t do transform feedback at the same time you do regular rendering.

Well, I wouldn't need a transform feedback buffer if I just stored the X,Y,Z coords of screen-drawn fragments under option #1. (One buffer would have the RGBA values; the next buffer would contain the X,Y,Z coords for the corresponding fragments drawn to screen.)

Option #2 uses a feedback buffer to regenerate the same scene, only without re-running the vertex stage (whose output is stored in the feedback buffer). The perk here is you get all the fragment data again, and you don't have to store data for each fragment that makes it to the screen in a buffer.

Your option #3 is outdated by the introduction of feedback buffers, correct?

And option #4 is what we're technically trying to do here in general, I believe.

the perk here is you get all the fragment data again and you don’t have to store data for each fragment that makes it to the screen in a buffer.

But you do have to store potentially more data.

A 1920x1080 framebuffer has about 2 million pixels. If you render more than 2 million vertices, then the space needed for your feedback buffers is greater than that. So it’s not always a perk.
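For rough numbers, assuming you capture just a 4-float position per vertex (my own back-of-the-envelope figures):
1920 x 1080 = 2,073,600 pixels, x 12 bytes for X,Y,Z = about 24.9 MB for the stored-position buffer.
2,000,000 vertices x 16 bytes = 32 MB for the feedback buffer, before any other captured per-vertex outputs.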

Your option #3 is outdated by the introduction of feedback buffers, correct?

No.

Feedback buffers aren’t free. If your vertex shader doesn’t do a whole lot, then your pre-rasterization time is going to be based primarily on vertex transfer speed. A well-optimized indexed list of triangles, which compresses vertex attributes appropriately, can be quite fast to draw. Transform feedback only works in 32-bit floats, which is far from a well-optimized format.

And there are limitations on them (the number of stored per-vertex values) that make using them potentially impossible.

And option #4 is what we're technically trying to do here in general, I believe.

In general, there is no need to store position data when working with deferred rendering. You can reconstitute the position from the window-space fragment coordinate in the deferred passes. Combined with the depth buffer value, that gives you the window-space Z coordinate (or possibly the 1/z coordinate, which you would have to store and retrieve, but that's one value rather than 3), and from there you can transform from window space to camera space or whatever space you need the positions to be in.
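A minimal sketch of that reconstruction, assuming the first pass's depth buffer is bound as a texture and the inverse projection matrix is passed in as a uniform (the names here are placeholders):

#version 330
uniform sampler2D depthTex;   // depth buffer saved from the first pass
uniform mat4 invProjection;   // inverse of the projection matrix
uniform vec4 viewport;        // viewport x, y, width, height
out vec4 fragColor;
void main()
{
    vec2 uv = (gl_FragCoord.xy - viewport.xy) / viewport.zw;
    float depth = texture(depthTex, uv).r;
    vec3 ndc = vec3(uv, depth) * 2.0 - 1.0;   // window space -> [-1,1] NDC
    vec4 eye = invProjection * vec4(ndc, 1.0);
    vec3 eyePos = eye.xyz / eye.w;            // camera-space position, recovered
    fragColor = vec4(eyePos, 1.0);            // use eyePos in your lighting instead
}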

The last statement about recalculating the window position back to a 3D position interests me, Alfonse; currently doing some research on it. Let me pick your brain a little more.

Going to throw out a few quick questions before I dig too deeply.

Standard vertex transformation in the vertex shader typically involves transforming the vertex to the appropriate position based on some Euler, quaternion, or matrix function, and then typically a projection transformation is applied to the resulting position to get window coords.

I’ll use a matrix notation to try and clarify my intent even though I don’t use matrices.

Just getting back to the transformed ModelMatrix * ViewMatrix (aka ModelViewMatrix) position would be fine enough for stepping back; that's one less step backwards to the original X,Y,Z coord. So really all I need to undo is the projection matrix and the Z-component transformation. (The projection part takes place in the vertex shader, and the Z-component transformation between the vertex and fragment shader; more on this later.)

So let's say, with the standard projection matrix
[Proj[0],0 ,0 ,0 ]
[0 ,Proj[1],0 ,0 ]
[0 ,0 ,Proj[2],-1]
[0 ,0 ,Proj[3],0 ]

your transformation of the properly rotated vector (after the ModelViewMatrix) to projection-corrected coords is:
gl_Position[0] = gl_Position[0] * proj[0];
gl_Position[1] = gl_Position[1] * proj[1];
gl_Position[3] = -gl_Position[2];
gl_Position[2] = gl_Position[2] * proj[2] + proj[3];
(This is verified to work; it's similar to what I use.)

So to undo the projection on the X and Y clip coords it's as simple as
gl_Position[0] = gl_Position[0] * (1/proj[0]);
gl_Position[1] = gl_Position[1] * (1/proj[1]);

Undoing the Z coord is
gl_Position[2] = (gl_Position[2] - proj[3]) * (1/proj[2]);
but I know this doesn’t equate to the Z value in the fragment shader.

However, I know the Z coord is transformed on its way to the fragment shader by some formula (the perspective divide plus the depth-range mapping) to make coords closer to the screen more accurate. This is really the only part I'd have trouble transforming back.

Hunting around for information on undoing the depth coord right now.

NOTE: THIS IS IN TESTING, NOT CONFIRMED TO WORK
– currently looking for the formula OpenGL uses to calculate gl_FragCoord from a clip coord.

Searching around, I found a lot of info on turning window coords back into object coords using gluUnProject. Unfortunately this is a little broad and utility-dependent. So I'm going to use this post as a scratch space and future reference for anyone else who is interested in reverse-projecting from window coords.

This will transform the window coords of gl_FragCoord.xyz back to the proper original vertex position.

The best reference I found was the formula used by gluUnProject:

MV = ModelViewMatrix (ModelMatrix * ViewMatrix)
P = ProjectionMatrix
vert = the recovered vertex: typedef struct{x,y,z,w} vert;
view = vec4 view (or GLint view[4] if you're in C++, not GLSL)
view specifies our viewport; you get view from a glGetIntegerv(GL_VIEWPORT, view) call, which returns four values: the x and y window coordinates of the viewport and its width and height.

where:
vert.x = (2 * (gl_FragCoord.x - view[0]) / view[2]) - 1
vert.y = (2 * (gl_FragCoord.y - view[1]) / view[3]) - 1
vert.z = (2 * gl_FragCoord.z) - 1
vert.w = 1
and then, all in one go:
vert = inverse(P * MV) * vert;
vert = vert / vert.w; (dividing the recovered w back out)
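In GLSL that whole path looks something like this (a sketch; the inverse matrix would be computed on the CPU side and passed in as a uniform, and all the names are mine):

vec3 unproject(vec3 winPos, mat4 invMVP, vec4 view)   // view = (x, y, width, height)
{
    // window coords -> normalized device coords in [-1, 1]
    vec4 ndc = vec4((winPos.xy - view.xy) / view.zw * 2.0 - 1.0,
                    winPos.z * 2.0 - 1.0,
                    1.0);
    vec4 obj = invMVP * ndc;
    return obj.xyz / obj.w;   // object-space position
}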

given the standard projection matrix
[Proj[0],0 ,0 ,0 ]
[0 ,Proj[1],0 ,0 ]
[0 ,0 ,Proj[2],-1]
[0 ,0 ,Proj[3],0 ]

you can derive a cheaper version that skips the full matrix inverse. The X and Y have to be multiplied back by the clip-space w, which (for this matrix) equals -eye.z and can be recovered from Z first:

eye.z = -proj[3] / (((2 * gl_FragCoord.z) - 1) + proj[2])
eye.x = ((2 * (gl_FragCoord.x - view[0]) / view[2]) - 1) * -eye.z * (1/proj[0])
eye.y = ((2 * (gl_FragCoord.y - view[1]) / view[3]) - 1) * -eye.z * (1/proj[1])
eye.w = 1
vert = inverse(MV) * eye
(the algebra for recovering w from Z is worked out later in this post)

Clip space coords to window coords, and window coords to clip space coords. Most references point to the gluProject and gluUnProject code.
note: if someone wants to confirm the positioning of coords for me with regard to the center of a pixel, and the GL window range in relation to pixels, that would be great.

this post will cover:
Clip Space to Window Coords, and
Window Coords to Clip Space

references for this: NeHe lesson 14, gluUnProject, and gluProject.

CLIP SPACE TO WINDOW COORDS
it's important to note that clip space x,y,z,w coords are defined as the values of in_Vertex after they've gone through your ModelView and Projection matrices (or quaternion or Euler-angle transformations, whatever method you're using), not your object space coords. OpenGL window coords start at 0,0 in the bottom-left corner; a pixel is denoted as a 1.0x1.0 box, so the bottom-left pixel runs from 0,0 to 1,1 with 0.5,0.5 being its center.

The formulas for Clip Space to Window Coords are as follows:

gl_FragCoord.x =
(viewport.width * (normalized(gl_Position.x) + 1 ) / 2) + viewport.xposition;
gl_FragCoord.y =
(viewport.height * (normalized(gl_Position.y) + 1 ) / 2) + viewport.yposition;
gl_FragCoord.z = (normalized(gl_Position.z) + 1) / 2; (assuming the default glDepthRange of 0 to 1)

A quick note on what it means to be 'normalized': normalizing is simply taking the clip coordinates (gl_Position) and multiplying them by one over the w value of the clip coordinate. (This is the perspective divide, and the result is called normalized device coordinates.)
so the normalized values are:
normalized(gl_Position.x) = gl_Position.x * (1 / gl_Position.w); or gl_Position.x / gl_Position.w;
normalized(gl_Position.y) = gl_Position.y * (1 / gl_Position.w); or gl_Position.y / gl_Position.w;
normalized(gl_Position.z) = gl_Position.z * (1 / gl_Position.w); or gl_Position.z / gl_Position.w;

As a breakdown of what's happening:
normalizing more or less removes the dependency on a set pixel resolution. A 'normalized' x value more or less says "this vertex is some percent (0.0 to 1.0) of the way across the screen" instead of saying "832 pixels from the left side", because different resolutions have different total pixel counts across.

However, this value acts as if the x and y axes were at the lower-left corner; if we drew anything below the x or y axis it would be off screen! We need to move our point to all-positive coords to match the window. We do this by adding 1 to our normalized value. This means values from -1.0 to 0 are now 0.0 to 1.0, and 0.0 to 1.0 values are now 1.0 to 2.0; all positive. Unfortunately this means we're calculating for two screens' worth of space (200% of the width!), so to bring it back down to one screen (100%) we divide by two. This gives negative-axis screen coords the range 0.0 to 0.5 and positive-axis screen coords the range 0.5 to 1.0.

Once we know what percent from the left side we are, we just multiply by the viewport width; this officially takes us to window coordinates.

We're almost done. If the viewport does NOT start at the leftmost or bottommost side of our screen, we simply have to add where it starts to our end value, like so:
gl_FragCoord.x = windowCoord.x + viewport.x;
gl_FragCoord.y = windowCoord.y + viewport.y;

Lastly, OpenGL maps the normalized z coord into the 0-to-1 depth range (the extra depth precision at close range falls out of the perspective divide above).
the formula is simply:
gl_FragCoord.z = (normalized(gl_Position.z) + 1) / 2;
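Putting the whole clip-to-window conversion together as one GLSL function (a sketch with my own names, assuming the default depth range):

vec3 clipToWindow(vec4 clipPos, vec4 view)   // view = (x, y, width, height)
{
    vec3 ndc = clipPos.xyz / clipPos.w;                 // the 'normalized' step
    float winX = view.z * (ndc.x + 1.0) / 2.0 + view.x;
    float winY = view.w * (ndc.y + 1.0) / 2.0 + view.y;
    float winZ = (ndc.z + 1.0) / 2.0;                   // default glDepthRange(0, 1)
    return vec3(winX, winY, winZ);                      // matches gl_FragCoord.xyz
}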

WINDOW COORDS TO CLIP SPACE
It can be very, very handy to be able to calculate clip space from window coords. Once you get clip space you can unproject and transform it to get model space coordinates. It's very handy for lighting calculations, figuring out where you clicked the mouse, or anything that requires you to know the X,Y,Z position of a pixel on the screen.

If you know how to get your desired window coordinate and z depth from your application, skip over this next part; I'll briefly go over how to get them on Windows.

Getting GL window coords from Windows with the mouse
First we want to get the coordinates under our mouse cursor. Under Windows we'll need a POINT variable to do this.
static POINT mouse;
We'll need some spots to store our points too.
GLfloat pointX, pointY, pointZ;
Next we need to get the cursor position; this is done with the function call
GetCursorPos(&mouse);

There's only one problem here: GetCursorPos returns screen coordinates with y starting at the top, while OpenGL window coords start at the lower left. To fix our input, first convert to client-area coordinates:
ScreenToClient(hWnd, &mouse); (where hWnd is your window handle)
then, since Windows y runs top-down, flip it against the client-area height: mouse.y = height - mouse.y;

Let's take the values out of the POINT structure.
pointX = (float)mouse.x;
pointY = (float)mouse.y;

Grats, you now have your X and Y values. We still need the depth value, however; this is stored in the depth buffer of our OpenGL framebuffer.
To get it we call:
glReadPixels((GLint)pointX, (GLint)pointY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &pointZ);

Window Coords to Clip Space
Now that we know the coordinates that we want to change to clip space, and have our pointX, pointY, and pointZ values, we can start reversing the clip-space-to-window-coordinate math.

We also need to know some information about the viewport; we can get what we need with:
static GLint view[4];
glGetIntegerv(GL_VIEWPORT, view);
this loads window offsetX into view[0];
window offsetY into view[1]; (remember, offset from the bottom, not top)
window width into view[2];
window height into view[3];

We need to adjust our input first, since a fragment's window coord sits at the pixel's center: not pointX, pointY, but rather pointX + 0.5 and pointY + 0.5. So:
pointX += 0.5;
pointY += 0.5;

Our first goal is to get back to the normalized values from gl_FragCoord. I adjusted for viewport offset in here as well.

normalized(gl_Position.x) = ((gl_FragCoord.x - view[0]) / view[2] * 2) - 1;
normalized(gl_Position.y) = ((gl_FragCoord.y - view[1]) / view[3] * 2) - 1;
normalized(gl_Position.z) = (gl_FragCoord.z * 2) - 1;

To be continued…

This is as far as we can take it without a little extra information. Luckily, we have just that: if we have the projection matrix, or at least know a few values out of it, we can get our exact coords. We'll need to find 'w' to convert from our normalized coords to our clip space coords.

consider the standard projection matrix as such:
(recall the projection matrix is applied after the modelView matrix)
[xmultiplier, 0, 0, 0]
[0, ymultiplier, 0, 0]
[0,0, zmultiplier, -1]
[0,0, wvalue ,0]
we’ll notate these values as such:
[mX,0,0, 0]
[0,mY,0, 0]
[0,0,mZ,-1]
[0,0,mW, 0]

and when we take our vertex result from the ModelView matrix and apply the projection matrix it’s the same as these operations:
gl_Position.x = vertex.x * mX;
gl_Position.y = vertex.y * mY;
gl_Position.w = -gl_Position.z;
(note that w is assigned before the value of z is changed, that’s important, if you change z before setting w, you get a different value)
gl_Position.z = gl_Position.z * mZ + mW;

by looking at this
we see that the z and w values are dependent upon each other; that means we can calculate w or z if we know mZ and mW, which we do!

to go from clip space to window space we multiplied our clipspace variables (gl_Position) by the inverse of W. Likewise, to get back to clip space we need to multiply by W. The problem is we don't know W yet, but we do know that

(recall the normalized gl_FragCoord.z = (gl_FragCoord.z * 2) - 1)

gl_Position.z = norm_GL_FragCoord.z * gl_Position.w;
gl_Position.w = -(gl_Position.z - mW) / mZ;

so:
-gl_Position.w = ((norm_GL_FragCoord.z * gl_Position.w ) - mW) / mZ;

going to shorten the notations:
gl_Position.w = glW;
norm_GL_FragCoord.z = normZ;

-glW = ((normZ * glW) - mW) / mZ;
work some algebra:
-glW = (normZ * glW / mZ) - (mW / mZ);
glW = (1/(-1-normZ/mZ)) * (-mW / mZ);
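That last line simplifies to glW = mW / (mZ + normZ). A quick sanity check with numbers of my own choosing, near = 1 and far = 100, so mZ = (far+near)/(near-far) = -101/99 and mW = 2*far*near/(near-far) = -200/99: at the near plane normZ = -1, and glW = (-200/99) / (-101/99 - 99/99) = (-200/99) / (-200/99) = 1, which is exactly the clip-space w (that is, -eye.z) you'd expect at a near distance of 1.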

now simply take your normalized coords of gl_FragCoord and multiply them by your W value.
gl_Position.x = normalized(gl_FragCoord.x) * glW;
gl_Position.y = normalized(gl_FragCoord.y) * glW;
gl_Position.z = normalized(gl_FragCoord.z) * glW;

and you have your Clip space positions back!

This is really handy if you want to find where your mouse click is, or if you need to do calculations based on where the vertices were before window space. In addition, it doesn't require any GLU functions.
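Here's the whole window-to-clip recovery rolled into one GLSL function (a sketch with my own names; view is the viewport x, y, width, height, and mZ/mW are the projection values from above):

vec4 windowToClip(vec3 winPos, vec4 view, float mZ, float mW)
{
    // window coords -> normalized device coords, all in [-1, 1]
    vec3 normPos;
    normPos.x = ((winPos.x - view.x) / view.z) * 2.0 - 1.0;
    normPos.y = ((winPos.y - view.y) / view.w) * 2.0 - 1.0;
    normPos.z = (winPos.z * 2.0) - 1.0;
    // recover the clip-space w from the algebra above: glW = mW / (mZ + normZ)
    float glW = mW / (mZ + normPos.z);
    return vec4(normPos * glW, glW);   // multiply the perspective divide back out
}

From there, undoing the projection and ModelView matrices works exactly like the gluUnProject formula earlier in the thread.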

=)

Actually, you can. Rendering happens during transform feedback unless you specifically disable it with glEnable(GL_RASTERIZER_DISCARD).
