the process of texturing

Is there a page that describes the math and logic behind texturing?

I would like to texture polygons myself, meaning that as I rasterize triangles (scan line), I would like to access texels and apply the color to the pixels.
I think I have an idea of how it’s done, but I would like to see detailed documents about the OpenGL method.

This is for a kind of project, and I’m not crazy

V-man

It’s pretty basic actually. If you’ve got any scanline renderer working, all you have to do is interpolate the uv coordinates across the polygons.

for each scanline
    for each pixel
        pixel[i] = texture[(int)u + (int)v*texStride];
        u += du;
        v += dv;

That’ll give you perspective-incorrect, point-filtered texturing. For bilinear filtering you’ll have to take subsamples, and for perspective-correct textures you’ll need to interpolate (s/w, t/w, 1/w) and divide per pixel, instead of linearly interpolating (u, v) directly.
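Fleshed out a little in C, the perspective-incorrect point-sampled loop might look something like this (all the names are made up, and the triangle setup that produces xStart/xEnd and the per-pixel deltas is assumed to exist):

/* Hypothetical GL_NEAREST-style span loop, no perspective correction.
   u and v are in texel units and assumed non-negative here. */
void draw_span(unsigned int *frameBuffer, int rowOffset,
               const unsigned int *texture, int texWidth, int texHeight,
               int xStart, int xEnd,
               float u, float v, float du, float dv)
{
    for (int x = xStart; x < xEnd; ++x) {
        int ui = (int)u % texWidth;            /* truncate + wrap (GL_REPEAT-ish) */
        int vi = (int)v % texHeight;
        frameBuffer[rowOffset + x] = texture[ui + vi * texWidth];
        u += du;                               /* step the interpolants per pixel */
        v += dv;
    }
}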

Take a look at the Mesa sourcecode. It has a software rasterizer.

I have seen the MESA source code, but never been able to make heads or tails of it. There aren’t many comments in the code to help.

I would like to do GL_LINEAR and GL_NEAREST for now. How does GL select the 4 texels for minification filtering? Then there is perspective correction for textures. Where does the q come from?

Interpolating the uv across the polygon is not really different from gouraud shading, I’m guessing.
V-man

Originally posted by V-man:
[b]I have seen the MESA source code, but never been able to make heads or tails of it. There aren’t many comments in the code to help.

I would like to do GL_LINEAR and GL_NEAREST for now. How does GL select the 4 texels for minification filtering? Then there is perspective correction for textures. Where does the q come from?

Interpolating the uv across the polygon is not really different from gouraud shading, I’m guessing.
V-man[/b]

Bilinear magnification (GL_LINEAR) is very easy to implement. When you’re interpolating your uv’s (yup, just like interpolating your gouraud colour), you should do them in float or fixed point. (If you’re doing it just to learn and don’t care about performance, do it in floats.) Your uv’s will have fractions, so you do a linear interpolation between 4 texels:

A = texMap[ round_down(u) + round_down(v)*texWidth]
B = texMap[ round_up(u) + round_down(v)*texWidth]
C = texMap[ round_down(u) + round_up(v)*texWidth]
D = texMap[ round_up(u) + round_up(v)*texWidth]

This gives you the 4 texels
A B
C D

Now you interpolate :

final_pixel = filtered_texel = (1-frac(v)) * ( (1-frac(u))*A + frac(u)*B ) + frac(v) * ( (1-frac(u))*C + frac(u)*D )

That’s it! Bilinear filtered texel. You can now multiply (modulate) this pixel with your interpolated vertex colour and you have gouraud shaded texture mapping with bilinear filtering.
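Here’s a sketch of that whole lookup in C, assuming a single-channel float texture, clamping at the edges, and made-up names (for RGB/RGBA you’d just repeat the blend per channel); treat it as illustration, not as what any particular implementation does:

#include <math.h>

/* Hypothetical GL_LINEAR-style lookup into a single-channel float texture.
   u and v are in texel units. */
float sample_bilinear(const float *texMap, int texWidth, int texHeight,
                      float u, float v)
{
    int u0 = (int)floorf(u), v0 = (int)floorf(v);
    int u1 = u0 + 1,         v1 = v0 + 1;
    float fu = u - (float)u0;                  /* frac(u) */
    float fv = v - (float)v0;                  /* frac(v) */

    /* clamp to the edges so we never read outside the texture */
    if (u0 < 0) u0 = 0;  if (u1 > texWidth  - 1) u1 = texWidth  - 1;
    if (v0 < 0) v0 = 0;  if (v1 > texHeight - 1) v1 = texHeight - 1;

    float A = texMap[u0 + v0 * texWidth];      /* top-left     */
    float B = texMap[u1 + v0 * texWidth];      /* top-right    */
    float C = texMap[u0 + v1 * texWidth];      /* bottom-left  */
    float D = texMap[u1 + v1 * texWidth];      /* bottom-right */

    float top    = (1.0f - fu) * A + fu * B;
    float bottom = (1.0f - fu) * C + fu * D;
    return (1.0f - fv) * top + fv * bottom;
}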

To do perspective correction you also need to interpolate q. q is 1/w; you get this number when you do your perspective divide. At each vertex you compute (s*q, t*q, q), interpolate those linearly across the span, and at each pixel you divide the first two by the interpolated q to get the (s, t) for your texture lookup. Doing a divide per pixel is costly, so the old renderers would do the divide only every N pixels and linearly interpolate the texture coordinates in between.
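A rough sketch of that inner loop in C, with made-up names and assuming the triangle/span setup already produced s*q, t*q, q and their per-pixel deltas (wrapping/clamping of the texel address left out for brevity):

/* Hypothetical perspective-correct span loop: sOverW = s*q, tOverW = t*q and
   q = 1/w are interpolated linearly in screen space; the divide recovers the
   true texture coordinates at each pixel. */
void draw_span_perspective(unsigned int *frameBuffer, int rowOffset,
                           const unsigned int *texture, int texWidth,
                           int xStart, int xEnd,
                           float sOverW, float tOverW, float q,
                           float dsOverW, float dtOverW, float dq)
{
    for (int x = xStart; x < xEnd; ++x) {
        float s = sOverW / q;                  /* the per-pixel divide */
        float t = tOverW / q;
        frameBuffer[rowOffset + x] = texture[(int)s + (int)t * texWidth];
        sOverW += dsOverW;
        tOverW += dtOverW;
        q += dq;
    }
}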

I’d do it in the following steps :

  1. get flat shading working
  2. get gouraud working
  3. get texture mapping working (non-perspective) + gouraud
  4. add specular to it (just another parameter to interpolate)
  5. get perspective correction working
  6. get bilinear filtering and mipmapping working

Once you have a basic scanline renderer in software, it’s trivial to add another parameter to interpolate. Hell, you could interpolate the vertex normal and do per-pixel phong.
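For example (purely illustrative, the names are mine), the whole set of per-pixel interpolants can be one struct plus one stepping function, and a new parameter is just another field:

/* Hypothetical bundle of everything the inner loop steps per pixel. */
typedef struct {
    float r, g, b;         /* gouraud colour     */
    float sOverW, tOverW;  /* texture coords * q */
    float q;               /* 1/w                */
    float z;               /* depth              */
    float spec;            /* specular intensity */
} Interpolants;

static void step_interpolants(Interpolants *p, const Interpolants *d)
{
    p->r += d->r;  p->g += d->g;  p->b += d->b;
    p->sOverW += d->sOverW;  p->tOverW += d->tOverW;
    p->q += d->q;  p->z += d->z;  p->spec += d->spec;
}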

>>>To do perspective correction you also need to interpolate q. q is 1/w; you get this number when you do your perspective divide. At each vertex you compute (s*q, t*q, q), interpolate those linearly across the span, and at each pixel you divide the first two by the interpolated q to get the (s, t) for your texture lookup. Doing a divide per pixel is costly, so the old renderers would do the divide only every N pixels and linearly interpolate the texture coordinates in between. <<<<

Ok, so vertex gets transformed by P*M and the w of the vertex is q for the tex coord of that vertex, is that it? You said q is 1/w but that doesn’t seem right to me. If q is given by glTexCoord4f, does that mean gl uses your q instead of generating one?

I seem to remember something of the sort for old renderers. Some didn’t do perspective correction at all, and the q component was ignored in glTexCoord.
Too bad GL doesn’t have phong shading. It’s just another equation. People worry too much about performance.

V-man

>>Ok, so vertex gets transformed by P*M and the w of the vertex is q for the tex coord of that vertex, is that it? You said q is 1/w but that doesn’t seem right to me. If q is given by glTexCoord4f, does that mean gl uses your q instead of generating one?<<

Yeah q = 1/w, that’s just a name for it. The “q” in the glTexCoord refers to a different parameter (similar to what “w” is in glVertex4).

clipVertex = (P*M)*v
q = 1/clipVertex.w

clipVertex.x *= q;
clipVertex.y *= q;
clipVertex.z *= q;
clipVertex.w *= q;

screenPosition.x = clipVertex.x*SCREEN_W/2 + SCREEN_W/2;
screenPosition.y = clipVertex.y*SCREEN_H/2 + SCREEN_H/2;
screenPosition.z = clipVertex.z*z_scale + z_offset;

texCoord.x *= q;
texCoord.y *= q;
texCoord.z = q;
texCoord.w = 0.0f; // don’t need this for interpolation of 2D texture coordinates

And yeah, if you pass your q into glTexCoord4 as the fourth parameter, then GL will use that instead of the default of 1. You NEED the q to get perspective correction.

I’m getting the urge to write a sw renderer again, hehe. It’d be so much easier today with the fast processors + 3Dnow/sse2.

Oh, and I would recommend Jim Blinn’s excellent book “A Trip Down the Graphics Pipeline”. It covers most of this.

SW renderers tend to be unsuccessful, unless you don’t want to do texturing. I don’t know if MESA uses SSE/3DNow, but the binary I had downloaded isn’t fast enough.

Another question that you probably know about.

If you have a point inside a triangle, how can you use the 3 vertices of that triangle to get that point? An affine combination is needed, but finding the coefficients is not always possible.

The equation would look like

constant1 * vertex1 +
constant2 * vertex2 +
constant3 * vertex3 =
the_point_inside_the_triangle

and we need to find those constants.

V-man

I’m in a hurry so I can’t be too thorough. Look up “barycentric coordinates” on google and you’ll find the answer to your question.
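Quick sketch of the usual 2D version you’ll find (the names are mine, so double-check the formulas before relying on them):

#include <stdbool.h>

/* Hypothetical barycentric solve for a 2D triangle (x0,y0)-(x1,y1)-(x2,y2):
   fills in c0, c1, c2 with c0 + c1 + c2 = 1 such that
   c0*v0 + c1*v1 + c2*v2 = p.  The point is inside when all three are >= 0. */
bool barycentric(float x0, float y0, float x1, float y1, float x2, float y2,
                 float px, float py, float *c0, float *c1, float *c2)
{
    float det = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2);
    if (det == 0.0f)
        return false;                          /* degenerate (collinear) triangle */
    *c0 = ((y1 - y2) * (px - x2) + (x2 - x1) * (py - y2)) / det;
    *c1 = ((y2 - y0) * (px - x2) + (x0 - x2) * (py - y2)) / det;
    *c2 = 1.0f - *c0 - *c1;
    return true;
}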

Just want to thank you for the help.

Did the outline version of my code, then flat shading, then gouraud. I can see that gouraud can benefit plenty from MMX/SSE, especially the saturation calculation.

Of course my scanline technique might not match the GL one right now, but I really want to try the texturing stuff.

V-man

This question wasn’t really OGL related, so I’ll include the obligatory “you ought to look elsewhere”, but I want to correct some misinformation about w and q as they apply to OpenGL.

Perspective correction of textures can be done exactly as the math described above, but referring to 1/w as q in the OGL sense is incorrect. Specifying a q does not replace perspective correction. If the implementation does perspective correction and you supply a q, it will still do perspective correction. q is the coordinate used to project the texture coordinates, just like w is used to project the vertices.

For those that really care, the math (using barycentric coords) is all spelled out in the GL spec in the rasterization section. (Look for a big equation in the polygon section.) I can’t remember exactly how the math works out for standard scan-line stuff, as I use barycentric coords for my personal stuff.
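From memory, the perspective-correct part of that equation has roughly this shape, where a, b and c are the fragment’s barycentric coordinates, fa/fb/fc are the attribute (e.g. a texture coordinate) at the three vertices, and wa/wb/wc are their clip-space w’s; check the spec for the exact form, the code below is just my sketch of it:

/* Rough sketch of perspective-correct barycentric interpolation of one attribute. */
float interp_perspective(float a, float b, float c,
                         float fa, float fb, float fc,
                         float wa, float wb, float wc)
{
    float num   = a * fa / wa + b * fb / wb + c * fc / wc;
    float denom = a / wa      + b / wb      + c / wc;
    return num / denom;
}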

-Evan