Smooth/noperspective for Dummies?

I’m gonna show my ignorance here, but some reading in ShaderX7 puzzles me. I intuitively get what smooth/noperspective do, but I’m not crystal clear on:

  1. when you would use noperspective, and
  2. exactly how smooth works.

When do you want noperspective?

Ok, now this is what prompted my questions. In ShaderX7 there’s a blurb that says that passing a 4D world-space position to the fragment shader in a perspective-correct (smooth) varying results in incorrect interpolation. But they don’t completely state why or what you should do about it.

Is this one situation where you’d want noperspective?

Intuitively this doesn’t feel right. A world-space position has W = 1. So why do you need a 4D world-space position anyway? XYZ is sufficient. And why wouldn’t you want perspective-correct interpolation for that vec3 position?

(in case anyone has ShaderX7 handy, see pgs. 317-321)

Now, as to how the smooth/noperspective options work, please check me on this:

smooth vs. noperspective - the gist of it: The intuition seems clear. If you interpolate linearly on the screen (noperspective), you miss the depth-foreshortening effect, which you get with smooth, aka perspective-correct interpolation. For 3D interpolation of a value under a perspective projection, you need bigger steps per pixel closer to the eye and smaller steps per pixel farther from the eye.
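For reference, here’s how I’d declare the two in GLSL (variable names are just my own examples):

 smooth        in vec3 worldPos;   // perspective-correct interpolation (the default)
 noperspective in vec2 screenVal;  // plain linear interpolation in screen space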

So how do they work? From the spec, I gather that noperspective is intuitively:

 lerp( a.x, b.x, t )

when interpolating across the 2D screen, where a and b are the values at two vertices (call them vertex A and vertex B) and t is the screen-space interpolation fraction between them. Similarly for .y, .z, .w. And smooth (perspective-correct) would be:

 lerp( a.x/pos_a.w, b.x/pos_b.w, t )
 -----------------------------------
 lerp(   1/pos_a.w,   1/pos_b.w, t )

and similarly for .y, .z, .w. Note here pos_a.w is gl_Position.w for vertex A, and pos_b.w is gl_Position.w for vertex B. (Is this right?)

So to make the latter (smooth) concrete, I’m assuming that for a vec4 varying, the pipeline must:

  1. take the values of that varying for each triangle vertex (a, b, c),
  2. compute x/posw, y/posw, z/posw, w/posw along with 1/posw for each,
  3. linearly interpolate these 5 values across the triangle, and then
  4. per fragment, divide by the interpolated 1/posw to recover the perspective-correct interpolated x, y, z, w values?

Is this correct?
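If it helps, here’s my mental model of those steps as code - purely my own sketch of what the hardware might do along one edge, not anything from the spec (GLSL’s mix() standing in for lerp):

 // Perspective-correct interpolation of a vec4 varying between two
 // vertices A and B, where t is the screen-space fraction (0..1).
 vec4 smoothLerp( vec4 a, float posAW, vec4 b, float posBW, float t )
 {
     vec4  numer = mix( a / posAW, b / posBW, t );      // steps 2-3: lerp attrib/w
     float denom = mix( 1.0 / posAW, 1.0 / posBW, t );  // steps 2-3: lerp 1/w
     return numer / denom;                              // step 4: divide to recover the value
 }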

And this perspective-correct interpolation is totally separate and distinct from projective texturing, right? That is, from doing texcoord.xyz / texcoord.w in the shader (or doing it implicitly via tex2Dproj or similar). The reason I ask is that I remember in the pre-shader days having to make sure you passed texcoord.w = 1 or “bad things” would happen (presumably unintended projective texturing), and I want to make sure that’s not mixed up with perspective-correct interpolation.
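For what it’s worth, my picture of projective texturing is just a per-fragment divide by the texcoord’s own w, on top of whatever interpolation already happened - something like this (sampler and variable names made up):

 uniform sampler2D tex;
 in vec4 texcoord;     // interpolated perspective-correctly like any smooth varying
 out vec4 fragColor;

 void main()
 {
     // These two are equivalent; textureProj just does the divide for you:
     fragColor = textureProj( tex, texcoord );
     // fragColor = texture( tex, texcoord.xy / texcoord.w );
 }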

I think you’re on the right track. (I was coding rasterizers before, though my understanding of the matter is not flawless.)

You use noperspective only for values that are relative to the 2D screen grid - i.e., if you want to remove the “a.xy / position.w” from the frag shader and move it to the vtx shader.
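Something like this in the vertex shader (my names, just a sketch) - the divide happens once per vertex, and the varying is already screen-linear, so noperspective is the right interpolation for it:

 uniform mat4 mvp;
 in vec4 vertexPos;
 noperspective out vec2 screenPos;  // already a 2D screen quantity

 void main()
 {
     gl_Position = mvp * vertexPos;
     screenPos = gl_Position.xy / gl_Position.w;  // the divide, moved out of the frag shader
 }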

That seems about correct - I remember back in the “ye olde” days (GeForce2?) getting non-perspective interpolation by doing something like the following (this was on hardware that had vertex shaders, but no pixel shaders):

vec4 pos = modelViewProjection * vertexPos;   // clip-space position
vec3 result = valueToInterpolateLinearly;     // whatever you want lerped in screen space
vec4 outCoord = vec4(result * pos.w, pos.w);  // pre-multiply by w so the later divide cancels out

Then the fixed-function hardware would do a divide by the texture coordinate’s w, which would undo the perspective correction.
(The above may not be correct, as I am taking it from memory)

As to your question from ShaderX7 - I had a look at the article, and I don’t think noperspective interpolation would help in this case. (It would probably give really wrong results, I think.)
As to why interpolating the world-space position causes artefacts - I am not sure. I personally would have thought it would work - and if that is the case, I probably make the same mistake in lots of the shaders I write.

For other readers - the article is comparing interpolating the world position and using a matrix multiply to get to “texture space position” in the pixel shader.
This is compared against getting the “texture space position” in the vertex shader and interpolating that. (via the same matrix in the vertex shader)
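Roughly, the two fragment-shader variants look like this (my paraphrase, made-up names):

 // Variant 1 (the one the article says gives artefacts): interpolate the
 // world position, do the matrix multiply per fragment.
 uniform mat4 worldToTexture;
 smooth in vec3 worldPos;
 // in main(): vec4 texPos = worldToTexture * vec4( worldPos, 1.0 );

 // Variant 2: do the multiply in the vertex shader, interpolate the result.
 smooth in vec4 texPos;  // = worldToTexture * worldPosition, computed per vertex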

Ok, thanks guys. I appreciate it.
