Well, I do often use textures, but I only use the two texture coordinates s and t. What can I use q and r for? And how can I use texture matrices?
Textures are defined by a coordinate <s, t, r> in three-space. Some versions of OpenGL ignore the third coordinate and effectively limit texture space to two dimensions. (In the same way you can tween texture coordinates between two points on a plane, i.e. through an image, you can tween coordinates through three dimensions and start playing with volumetric data. Very trippy.)
The q coordinate turns the three-space Cartesian coordinate into a homogeneous coordinate. Don't worry if you don't know about homogeneous coordinates; there are plenty of resources around if you're interested. Suffice it to say that the easiest way to turn <s, t, r> into homogeneous coordinates is simply to set q=1.0, i.e.
<s, t, r> -> <s, t, r, 1>
It is somewhat easier to do stuff in homogeneous space than in Cartesian space. Translation, for instance, is simply a matrix multiplication in homogeneous coordinates (as compared to a separate vector addition in Cartesian space). OpenGL defines a texture matrix to manipulate texture coordinates in the same way the modelview matrix manipulates vertices.
Hope this helps.
Thanks John. So, in the end, 4D texture coordinates are transferred back to two coordinates which define a point on the image I supply: the s and t coordinates are divided by q. Right? And what happens if I supply another image (using glTexImage2D)? Does OpenGL discard the previous one and replace it with the new one, or does it somehow keep it in memory for later use?