Mapping texels to pixels

I asked a similar question earlier this year and got it working somehow, but obviously I didn't (and still don't) understand what I was doing. First, what am I trying to do: calculate the average scene luminance by downscaling a texture down to 1x1 while applying a box filter. This is done using two pbuffers in a ping-pong manner, i.e. render full texture A to quarter-size texture B, then render quarter texture B to 1/16 texture A, and so on until I render just one pixel, which then contains the average luminance of the scene. I'm trying to exploit the bilinear filter, so averaging 4 neighbouring texels costs just one texture lookup.
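For reference, a minimal sketch of that ping-pong loop (my own illustration, not code from the thread), assuming a 256x256 source, glOrtho(0,1,0,1,-1,1), GL_LINEAR filtering, and two hypothetical helpers bindAsRenderTarget()/bindAsTexture() standing in for the actual pbuffer switching code:

#include <GL/gl.h>

void bindAsRenderTarget(int which);   /* hypothetical: make pbuffer 'which' the render target */
void bindAsTexture(int which);        /* hypothetical: bind pbuffer 'which' as source texture */

/* Repeated 2:1 reduction: each pass draws a full-target quad, so with
   GL_LINEAR filtering every destination pixel averages a 2x2 source block. */
void downscaleToOnePixel(int srcSize)              /* e.g. 256 */
{
    int pass = 0;
    int size;
    for (size = srcSize / 2; size >= 1; size /= 2, ++pass)
    {
        bindAsRenderTarget(pass % 2 == 0 ? 1 : 0);  /* write to B, A, B, ...  */
        bindAsTexture     (pass % 2 == 0 ? 0 : 1);  /* read the other buffer  */
        glViewport(0, 0, size, size);

        glBegin(GL_QUADS);
        glTexCoord2d(0, 0); glVertex2d(0, 0);
        glTexCoord2d(1, 0); glVertex2d(1, 0);
        glTexCoord2d(1, 1); glVertex2d(1, 1);
        glTexCoord2d(0, 1); glVertex2d(0, 1);
        glEnd();
    }
}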
What I obviously don't get right is how to exactly match rendered pixels with the texels being fetched.
Let's assume we are in the last step of the process: we want to average the lower left 4 texels of texture A into the lower left pixel/texel of texture B.
I was trying to achieve this with these coordinates:

glTexCoord2d(xtexel, ytexel);	glVertex2d(0, 0);
glTexCoord2d(xtexel, ytexel); glVertex2d(1.0/256.0, 0);
glTexCoord2d(xtexel, ytexel); glVertex2d(1.0/256.0, 1.0/256.0);
glTexCoord2d(xtexel, ytexel); glVertex2d(0, 1.0/256.0);

xtexel and ytexel are 1.0/texturewidth and 1.0/textureheight. The destination coordinates span exactly one pixel of the 256x256 target.
The projection matrix was setup with this call:
glOrtho(0.0,1.0,0.0,1.0,-1.0,1.0);
The texture sampling should occur centrally between the 4 lower left texels of the source texture (at the top right corner of the lower left texel), and the rendered quad should cover only 1 pixel.
The problem is, it doesn't work right.
There's a document “Mapping texels to pixels in D3D”, but I didn't find a similar one for GL.
So, how is it done the right way in GL?

thx in advance

Not an answer to your question, but have you tried automatic mipmap generation and grabbing the smallest mipmap level (perhaps quicker)? Also, with the ping-pong, why don't you skip the intermediary stages and just draw the texture at 1x1? (Not too sure how accurate that's going to be for the average luminance though, probably not very.)

The xtexel in your code should be 0.5/texturewidth instead of
1.0/texturewidth. The same with the ytexel.

0 - the start of first texel
0.5/width - center of first texel
1/width - end of first/start of second texel

1-1/width - start of last texel
1-0.5/width - center of last texel
1 - end of last texel.
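As a quick reference, the same texel positions as tiny C helpers (a sketch; the function names are mine, not from the thread):

/* Texture coordinate of texel i in a texture 'width' texels wide, i = 0 .. width-1. */
float texelStart (int i, int width) { return (float)i / width; }
float texelCenter(int i, int width) { return (i + 0.5f) / width; }
float texelEnd   (int i, int width) { return (float)(i + 1) / width; }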

Similarly with the pixels in the framebuffer:
-1 - start of first pixel
-1+0.25/width - center of first pixel
-1+0.5/width - end of first pixel/start of second

1-0.5/width - start of last pixel
1-0.25/width - center of last pixel
1 - end of last pixel

With both MODELVIEW & PROJECTION set to identity, you map a texture to the framebuffer simply with
glTexCoord2d(0, 0) -> glVertex2d(-1, -1)
glTexCoord2d(0, 1) -> glVertex2d(-1, 1)
glTexCoord2d(1, 0) -> glVertex2d(1, -1)
glTexCoord2d(1, 1) -> glVertex2d(1, 1)
and if the width & height of the framebuffer are exactly the width & height of the texture, you get a one-to-one mapping of texels to pixels this way.
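A minimal sketch of that one-to-one blit as actual calls (assuming texturing is already enabled and the framebuffer matches the texture size):

#include <GL/gl.h>

/* One-to-one texel-to-pixel copy with identity MODELVIEW and PROJECTION. */
void blitOneToOne(void)
{
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

    glBegin(GL_QUADS);
    glTexCoord2d(0, 0); glVertex2d(-1, -1);
    glTexCoord2d(1, 0); glVertex2d( 1, -1);
    glTexCoord2d(1, 1); glVertex2d( 1,  1);
    glTexCoord2d(0, 1); glVertex2d(-1,  1);
    glEnd();
}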

It is pretty simple and intuitive. There is no need for a special document to describe it. In contrast, in D3D it is counter-intuitive, hence the necessity of a special document describing it.

Oops, sorry, your code is right - xtexel should be 1/width (and so should ytexel) since you want the point between the first & second texels.
If it doesn't appear to work, the problem is probably somewhere else. The glOrtho projection looks right too.

I think, all this 0.5 pixels/texels/whatever is rather confusing…

I didn't know that pixels are sampled at their centre, too.
If I understand your explanation right, the first pixel is not sampled at 0.0, but at 0.5/width (a frustum of 0…1 assumed).
So, if I tell GL:

glTexCoord2d(0, 0); glVertex2d(0, 0);

it will actually draw a pixel at coordinate (0.5/width, 0.5/height) that has a texture coordinate of (1/2 xtexel, 1/2 ytexel). This isn't obvious!

So, if I wanted the first pixel drawn to map to texture coordinate (xtexel, ytexel), I would actually have to specify:
glTexCoord2d(0.5*xtexel, 0.5*ytexel); glVertex2d(0, 0);

alternatively I could offset the polygon border a bit to achieve the same effect:
glTexCoord2d(0.0, 0.0); glVertex2d(0.5/width, 0.5/height);

Is that right? Can you clear things up? In my opinion, there really is a need for a document describing all that peculiar behaviour in detail. The GL specs certainly describe it, but I need pictures to get it into my head :wink: Especially when using GL to do image processing, it's very important to map texels exactly to pixels. I'll have to check much of my code for similar error sources :slight_smile:

In your example you seem to use a frustum going from -1 to 1. You said the lower left pixel would be at

-1+0.25/width - center of first pixel

I think you made a mistake here, I'd say it maps to:

-1 + 0.5*2/width == -1 + 1.0/width

(width => width of the screen in pixels)

Zed:
Using automatic mipmapping sounds like a nice idea. But do cards support that for fp textures, too?

The examples I gave were for both PROJECTION & MODELVIEW identity, so the coords vary from -1 to 1.
In your case (glOrtho(0,1,0,1,-1,1)) their range is 0 to 1.

Here is a simple picture assuming a framebuffer width & height of 2 and the projection set up your way:

[ASCII diagram: 2x2 grid of framebuffer pixels labelled with their window coordinates; mangled by the forum formatting here, reposted readably in the code block further down]

center of upper left pixel is (0.25,0.25)
center of upper right pixel is (0.75,0.25)
center of lower left pixel is (0.25,0.75)
center of lower right pixel is (0.75,0.75)

Note that the polygon rasterization rule is “a fragment belongs to the polygon iff its center lies inside the polygon” (with other clauses further clarifying the cases when the center lies exactly on the border), i.e. when you draw infinitely thin lines between the vertices, if the fragment (or pixel) center lies inside the enclosed area, it belongs to the polygon.
So when you wish to draw a quad containing exactly the upper left pixel in the diagram above, its ideal coords must be (0,0)-(0,0.5)-(0.5,0.5)-(0.5,0) (but any other quad containing only the center of that pixel will do).
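As a small sketch of that rule with the coordinates above (2x2 framebuffer and glOrtho(0,1,0,1,-1,1) assumed):

/* Covers exactly one pixel: only the pixel whose center (0.25,0.25)
   lies inside the quad gets rasterized. */
glBegin(GL_QUADS);
glVertex2d(0.0, 0.0);
glVertex2d(0.5, 0.0);
glVertex2d(0.5, 0.5);
glVertex2d(0.0, 0.5);
glEnd();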

Did I manage to make the things clearer now?

Please copy-paste the pseudographic “picture” from my previous post into some text editor with fixed-width characters.

Btw, automatic mipmap generation is indeed the right way to go for your case (if it's supported in your OpenGL implementation). It also might be simpler and faster to just glReadPixels into a memory buffer and find the mean color yourself (add all the pixels together and divide by their number).
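A minimal sketch of that read-back-and-average fallback (my own illustration; assumes an RGB framebuffer and leaves out error handling):

#include <GL/gl.h>
#include <stdlib.h>

/* Read the frame back and compute the mean color on the CPU. */
void averageColor(int width, int height, float rgb[3])
{
    unsigned char *buf = (unsigned char *)malloc(width * height * 3);
    double sum[3] = { 0, 0, 0 };
    int i;

    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed RGB rows */
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buf);

    for (i = 0; i < width * height; ++i) {
        sum[0] += buf[3 * i + 0];
        sum[1] += buf[3 * i + 1];
        sum[2] += buf[3 * i + 2];
    }
    rgb[0] = (float)(sum[0] / (width * height) / 255.0);
    rgb[1] = (float)(sum[1] / (width * height) / 255.0);
    rgb[2] = (float)(sum[2] / (width * height) / 255.0);
    free(buf);
}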

It seems that copy-pasting into a fixed-width editor doesn't do the job.

I would appeal to the administrators of the site, if possible, to allow people to use pseudo-graphics in their posts for tables/diagrams/etc., because this is a technical forum and such a possibility would be useful.

Try putting your diagram in a code block.

-SirKnight

Thanks, SirKnight

+-------------------+-------------------+
|0,0     0.25,0     |0.5,0    0.75,0    |1,0
|                   |                   |
|0,0.25  0.25,0.25  |0.5,0.25 0.75,0.25 |1,0.25
|                   |                   |
+-------------------+-------------------+
|0,0.5   0.25,0.5   |0.5,0.5  0.75,0.5  |1,0.5
|                   |                   |
|0,0.75  0.25,0.75  |0.5,0.75 0.75,0.75 |1,0.75
|                   |                   |
+-------------------+-------------------+
0,1      0.25,1      0.5,1    0.75,1     1,1

glTexCoord2d(0, 0); glVertex2d(0, 0);

it will actually draw a pixel at coordinate (0.5/width, 0.5/height) that has a texture coordinate of (1/2 xtexel, 1/2 ytexel). This isn't obvious!

That’s not true. Calling just those calls does not render a pixel at all – you need more vertices to generate a pixel.

If you begin GL_QUADS and make these calls, you will render a single pixel which covers exactly one texel, assuming the projection maps one unit to one pixel and your texture is tWidth by tHeight in size:

glTexCoord2f( 0, 0 );                      glVertex2f( 0, 0 );
glTexCoord2f( 1.0f/tWidth, 0 );            glVertex2f( 1, 0 );
glTexCoord2f( 1.0f/tWidth, 1.0f/tHeight ); glVertex2f( 1, 1 );
glTexCoord2f( 0, 1.0f/tHeight );           glVertex2f( 0, 1 );

The trick to remember is that pixels are sampled at the CENTER, but integer pixel coordinates are on the edge between pixels. Because texture coordinates are interpolated, the texture coordinate 0,0 gets interpolated to 0.5/tWidth, 0.5/tHeight at the point where the texture actually gets sampled. This means that the math Just Works, and there’s no need to offset by half pixels, quarter pixels, or any other amount when matching texels to pixels – unless there are driver bugs.
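To make that concrete with numbers (my own arithmetic, for illustration): for a 1:1 blit of a 256-texel-wide texture onto 256 pixels with texture coordinates running from 0 to 1, the first pixel is rasterized at window position x = 0.5, so the texture coordinate interpolated there is 0 + (0.5/256)*(1 - 0) = 0.5/256, which is exactly the center of the first texel.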

jwatte:

Of course, those two commands don't render a point. I am using quads; I only used one line of code for simplicity. Just assume that the given point is the lower left corner of a fullscreen quad.

Because texture coordinates are interpolated, the texture coordinate 0,0 gets interpolated to 0.5/tWidth, 0.5/tHeight at the point where the texture actually gets sampled.
This is what I was trying to say. Now, if you look at my initial problem, I'm not trying to directly hit the lower left texel when the first lower left pixel gets drawn. Instead I want the lower left pixel to sample at (1/texturewidth, 1/textureheight). I want to directly hit the centre of the first 4 texels. The rightmost, uppermost pixel of the screen should sample at the centre of the rightmost, uppermost 4 texels of the texture. What would a quad with this texture sampling behaviour look like?

The question is what the ratio between the two quantities (screen and texture) is.

If the ratio is 2:1, i.e. there's a 2x2 texel block for each screen pixel, then the correct thing to do is still to set both the texcoord and the pixel coord to 0. This is because the texture coordinate delta for one pixel is (texture coordinate width)/(width in pixels), and the pixel is sampled in the center of that delta.

Thus, the actual point sampled is dependent BOTH on how many pixels wide your target quad is, AND how many texels wide your texture coordinate source is.

If the source texture is as wide as the target pixel rectangle, but you want to sample between texels, then you add 0.5/tWidth and add 0.5/tHeight to texture coordinates. If the texture is 2x as wide as the pixel rectangle, you add nothing. If there’s some other ratio, you can’t actually make EACH sample line up between texture texels, but you can make the beginning and end points hit those specific points.

In general, in a single dimension (say, the “s” texture coordinate dimension), given a pixel count of “wP”, a beginning texture coordinate of “s0” and an ending texture coordinate of “s1”, the texture will be sampled, for t = 0 … (wP-1), at the positions (t+0.5)/wP*(s1-s0)+s0. If you have a specific “t” (such as 0 and wP-1 for your edge cases), then you can just plug those values in and solve for s0 and s1 as appropriate. If you want to hit specific texels, rather than specific texture coordinates, then just use the fact that each texel is 1/wT (texture width) in size, with the left edge of the first texel being at position 0.
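A small sketch of that formula, plus the 2:1 check, as code (my own illustration, assuming a 256-texel source reduced onto 128 pixels):

/* Texture coordinate sampled at destination pixel t (t = 0 .. wP-1), given the
   texture coordinates s0 and s1 assigned to the quad's left and right edges. */
double sampledCoord(int t, int wP, double s0, double s1)
{
    return (t + 0.5) / wP * (s1 - s0) + s0;
}

/* 2:1 reduction check: 256 texels onto 128 pixels with s0 = 0, s1 = 1.
   sampledCoord(0, 128, 0.0, 1.0) == 0.5/128 == 1.0/256, i.e. exactly the
   border between the first and second texels, which is the point the
   bilinear box filter needs to hit. */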