Originally posted by V-man:
[b]I have seen the MESA source code, but never been able to make heads or tails of it. There aren't many comments in the code to help.
I would like to do GL_LINEAR and GL_NEAREST for now. How does GL select the 4 texels when doing minification filtering? Then there is perspective correction for textures. Where does the q come from?
Interpolating the uv across the polygon is not really different than Gouraud shading, I'm guessing.[/b]
Bilinear magnification (GL_LINEAR) is very easy to implement. When you're interpolating your uv's (yup, just like interpolating your Gouraud colour), you should do them in float or fixed point. (If you're doing it just to learn and don't care about performance, do it in floats.) Your uv's will have fractions, so you do a linear interpolation between 4 texels:
A = texMap[ round_down(u) + round_down(v)*texWidth]
B = texMap[ round_up(u) + round_down(v)*texWidth]
C = texMap[ round_down(u) + round_up(v)*texWidth]
D = texMap[ round_up(u) + round_up(v)*texWidth]
This gives you the 4 texels
Now you interpolate :
final_pixel = filtered_texel = (1-frac(v)) * ((1-frac(u))*A + frac(u)*B) + frac(v) * ((1-frac(u))*C + frac(u)*D)
That’s it! Bilinear filtered texel. You can now multiply (modulate) this pixel with your interpolated vertex colour and you have gouraud shaded texture mapping with bilinear filtering.
To do perspective correction you also need to interpolate q. q is 1/w; you get this # when you're doing your perspective divide. The trick is that s·q, t·q and q all interpolate linearly in screen space (plain s and t do not), so you interpolate those three and at each pixel you compute (s·q/q, t·q/q) to recover the true (s, t) for your texture lookup. Doing a divide per pixel is costly, so the old renderers would only do the true divide every N pixels and linearly interpolate the uv's in between.
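A tiny sketch of the per-pixel version in C, assuming hypothetical names (PerspAttr, sq for s·q) and that t walks 0..1 across the span in screen space:

```c
/* Perspective-correct interpolation of one texture coordinate.
   At each endpoint we store s*q and q (q = 1/w); both are linear
   in screen space, and a divide per pixel recovers the true s. */
typedef struct { float sq, q; } PerspAttr;

static float persp_s(PerspAttr a, PerspAttr b, float t)
{
    float sq = a.sq + t * (b.sq - a.sq); /* linear lerp of s*q */
    float q  = a.q  + t * (b.q  - a.q);  /* linear lerp of q   */
    return sq / q;                       /* per-pixel divide   */
}
```

For example, going from s=0 at w=1 to s=1 at w=2, the screen-space midpoint gives s = 1/3, not 0.5; that skew toward the nearer endpoint is exactly what affine interpolation gets wrong.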
I’d do it in the following steps :
- get flat shading working
- get gouraud working
- get texture mapping working (non-perspective) + gouraud
- add specular to it (just another parameter to interpolate)
- get perspective correction working
- get bilinear filtering and mipmapping working
Once you have a basic scanline renderer in software, it's trivial to add another parameter to interpolate. Hell, you could interpolate the vertex normal and do per-pixel Phong.
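The "just another parameter" idea can be sketched as a generic span loop; draw_span and the single float attribute are illustrative assumptions (a real renderer would step several attributes at once, after clipping):

```c
/* Interpolate one per-vertex attribute (colour, specular, a normal
   component, ...) linearly across a horizontal span of pixels. */
static void draw_span(float x0, float x1, float c0, float c1,
                      float *out, int width)
{
    int xs = (int)x0, xe = (int)x1;
    float dc = (x1 > x0) ? (c1 - c0) / (x1 - x0) : 0.0f;
    float c = c0;
    for (int x = xs; x < xe && x < width; ++x) {
        if (x >= 0)
            out[x] = c; /* write the interpolated value */
        c += dc;        /* forward difference: one add per pixel */
    }
}
```

Adding specular, uv's, or a normal is just more `c`/`dc` pairs stepped in the same loop, which is why each step in the list above is a small increment over the previous one.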