I have been trying to figure out precisely how OpenGL interpolates texture coordinates between vertices on the screen when the GL_PERSPECTIVE_CORRECTION_HINT is set to GL_NICEST. I am particularly interested in knowing what happens to all four texture coords (s,t,r AND q) since I want to use the q coordinate to implement a secondary projective transformation.

Any info would be greatly appreciated since I’ve checked all the usual sources (Red Book, Blue Book, Angel’s book) with no luck.

OpenGL interpolates all four texture coordinates linearly in eye space,
then looks up (s/q) for 1D textures, (s/q, t/q) for 2D textures, and (s/q, t/q, r/q) for 3D textures.
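The divide described above can be sketched numerically (a minimal illustration of the q-divide, not actual GL code; the function name is made up):

```python
def texture_lookup_coords(s, t, r, q):
    """Return the coordinates actually used for the texture lookup.

    After interpolation, OpenGL divides the first three texture
    coordinates by q, so a non-unit q acts as a projective divide,
    which is what makes projective texturing through q possible.
    """
    return (s / q, t / q, r / q)

# With the default q = 1 the divide is a no-op:
assert texture_lookup_coords(0.5, 0.25, 0.0, 1.0) == (0.5, 0.25, 0.0)

# A q of 2 scales the lookup coordinates down, as in projective texturing:
assert texture_lookup_coords(0.5, 0.25, 0.0, 2.0) == (0.25, 0.125, 0.0)
```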

Well, not linearly in eye coordinates, exactly… s, t, r, and q are interpolated linearly in window coordinates.

There is some trickery in the OpenGL spec accounting for how this can be done with only one divide per pixel, but that is more of a hardware-level optimization and should not affect applications.
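The one-divide idea can be sketched like this (a numerical illustration of the standard formulation, not what any particular driver literally does): interpolate a/w and 1/w linearly in screen space, then recover a with a single divide at the pixel.

```python
def perspective_correct(a0, a1, w0, w1, f):
    """Perspective-correct interpolation of an attribute along an edge.

    a0, a1 are the attribute values (e.g. s, t, r or q) at the two
    vertices, w0, w1 the clip-space w of each vertex, and f the
    fraction of the distance between them in *screen* space.
    a/w and 1/w both vary linearly in screen space, so one divide
    recovers the perspective-correct attribute value.
    """
    a_over_w = (1 - f) * (a0 / w0) + f * (a1 / w1)
    one_over_w = (1 - f) * (1 / w0) + f * (1 / w1)
    return a_over_w / one_over_w

# Vertices with s = 0 and s = 1 at depths giving w = 1 and w = 3:
# the screen-space midpoint lands at s = 0.25, not 0.5, because the
# far half of the edge is compressed on screen.
mid = perspective_correct(0.0, 1.0, 1.0, 3.0, 0.5)
assert abs(mid - 0.25) < 1e-12
```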

I’m still a bit puzzled. If you interpolated linearly in window coordinates, then in the usual case where the q coordinate has a constant value of 1, linear interpolation would just give you a q value of 1 over the entire window. That would mean you were effectively doing linear interpolation on the s and t values, which would not correct for perspective effects.
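That concern can be checked numerically (a sketch with made-up vertex values): with q fixed at 1, naive screen-linear interpolation of raw s does indeed differ from the eye-space-linear result.

```python
def screen_linear(a0, a1, f):
    # Naive linear interpolation in window coordinates.
    return (1 - f) * a0 + f * a1

def eye_linear_on_screen(a0, a1, w0, w1, f):
    # Linear in eye space, evaluated at screen-space fraction f:
    # interpolate a/w and 1/w linearly on screen, then divide.
    num = (1 - f) * (a0 / w0) + f * (a1 / w1)
    den = (1 - f) * (1 / w0) + f * (1 / w1)
    return num / den

# Two vertices with s = 0 and s = 1, q = 1 at both, but at different
# depths (clip-space w of 1 and 3).  At the screen midpoint:
naive = screen_linear(0.0, 1.0, 0.5)                      # 0.5
correct = eye_linear_on_screen(0.0, 1.0, 1.0, 3.0, 0.5)   # ≈ 0.25
assert naive != correct
```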

If we are doing linear interpolation in eye coordinates, are we interpolating (s, t, r, q) as a function of (x, y, z) or of (x, y, z, w)?

Secondly, I was wondering whether quads and triangles are handled differently from the point of view of texture coordinate generation.