I'm trying hard to understand perspective-correct texturing and projective texturing, which is used for shadow maps.

Ok, so far as I know…:

on the vertex stage:
calculate s/w, t/w, r/w, q/w (w is the view-space w, and (s,t,r,q) is the texture coordinate)
…
rasterization
interpolate the values calculated in the vertex stage linearly over the polygon.
…
on the pixel stage:
calculate
s/q = (s/w) / (q/w)
t/q = (t/w) / (q/w)
r/q = (r/w) / (q/w)
Ok, after this division w no longer appears in the fraction. So why are the values divided by w in the vertex stage at all? I could just interpolate s, t, r, q over the polygon, and in the pixel stage I would still end up with s/q, t/q and r/q.
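To make the comparison concrete, here is a small sketch (the helper names and values are mine, not from OpenGL) of the two schemes along one screen-space span between two vertices. Each vertex carries a texcoord component s, a projective coordinate q, and its clip-space w.

```python
def correct_s_over_q(v0, v1, t):
    """Perspective-correct: interpolate s/w and q/w linearly, divide per pixel."""
    s_w = (1 - t) * (v0["s"] / v0["w"]) + t * (v1["s"] / v1["w"])
    q_w = (1 - t) * (v0["q"] / v0["w"]) + t * (v1["q"] / v1["w"])
    return s_w / q_w  # the w's cancel per pixel, not per vertex

def naive_s_over_q(v0, v1, t):
    """Interpolate s and q directly in screen space (what the question proposes)."""
    s = (1 - t) * v0["s"] + t * v1["s"]
    q = (1 - t) * v0["q"] + t * v1["q"]
    return s / q

# Two vertices at different depths: the far one has a larger w.
near = {"s": 0.0, "q": 1.0, "w": 1.0}
far  = {"s": 1.0, "q": 1.0, "w": 4.0}

print(correct_s_over_q(near, far, 0.5))  # 0.2 -- texture foreshortens toward the far vertex
print(naive_s_over_q(near, far, 0.5))    # 0.5 -- no perspective effect at all
```

When both vertices share the same w, the two schemes agree; they only diverge when w varies across the polygon, which is exactly the perspective case.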

I know that to implement shadow maps with OpenGL I don't explicitly have to understand this, but I would really like to understand the concept behind it. I have read the paper “Fast Shadows and Lighting Effects Using Texture Mapping” by Segal et al., and I have also read the shadow map PowerPoint presentation from NVIDIA. But I still have problems with it.

Therefore, any help on this is greatly appreciated.

But it is done in that shadow map presentation: the texture coordinate is divided by w and then interpolated over the polygon. I don’t understand this division, since in the pixel stage we get rid of that w anyway.

Originally posted by A027298: But it is done in that shadow map presentation: the texture coordinate is divided by w and then interpolated over the polygon. I don’t understand this division, since in the pixel stage we get rid of that w anyway.
If that presentation is publicly available, I’d fancy a link.

My guess is that they use w and q interchangeably, i.e. they use the per-vertex w as the per-vertex q, and then adjust the s, t, r texcoords for that q.

If you just divide s,t,r by some value, without using that same value as q, you’ll end up shrinking the texture.
E.g. if you want to hit the (1;1) texel at a particular vertex:

“Good”
Texcoords:=(1;1;0;1)
or
Texcoords:=(5;5;0;5)
or
Texcoords:=(1/2;1/2;0;1/2)

These will all resolve to (1;1) after division by q.
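A quick check (the function name is mine) that all three texcoords above land on the same texel after the q divide, while dividing s and t without a matching q shrinks the result:

```python
def resolve(s, t, r, q):
    """Projective texcoord resolution: divide by q (r ignored for a 2D lookup)."""
    return (s / q, t / q)

print(resolve(1, 1, 0, 1))        # (1.0, 1.0)
print(resolve(5, 5, 0, 5))        # (1.0, 1.0)
print(resolve(0.5, 0.5, 0, 0.5))  # (1.0, 1.0)

# "Bad": s and t divided by 5, but q left at 1 -- the texture shrinks.
print(resolve(1 / 5, 1 / 5, 0, 1))  # (0.2, 0.2)
```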

Those slides describe the mechanics of perspective correction and why projection is “almost free”.

OpenGL does all the per-vertex work for dividing by w.

Note that interpolate(A)/interpolate(B) is not the same thing as interpolate(A/B). Imagine what happens as B crosses 0.
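A tiny numeric illustration (the endpoint values are made up) of that point, including the zero crossing:

```python
def lerp(a, b, t):
    return (1 - t) * a + t * b

A0, B0 = 1.0, 2.0
A1, B1 = 3.0, 1.0

# Quotient of interpolants vs. interpolant of quotients -- they differ:
print(lerp(A0, A1, 0.5) / lerp(B0, B1, 0.5))  # 2.0 / 1.5 = 1.333...
print(lerp(A0 / B0, A1 / B1, 0.5))            # (0.5 + 3.0) / 2 = 1.75

# Now let B cross zero between the endpoints:
B0, B1 = 1.0, -1.0
print(lerp(B0, B1, 0.5))            # 0.0 -- the true quotient blows up here
print(lerp(A0 / B0, A1 / B1, 0.5))  # -1.0 -- but interpolate(A/B) sails right past it
```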

Paul Heckbert, Henry Moreton, and Jim Blinn have written papers describing perspective correction in more detail. Paul and Henry call it “rational linear interpolation”, and it is definitely different from “linear interpolation”.

You should be able to google for references to either of those papers.