NPOT Texture Downsampling

Once upon a time… I implemented tonemapping for an HDR renderer. Back then I had a hard time, because I didn't really consider/understand GL's polygon/texture sampling behaviour. But eventually I got that renderer working.

Now I wanted to switch to NPOT textures, because they take less memory and I would have no problems with the different aspect ratios (width/height) of the intermediate results. The switch was as easy as just providing NPOT sizes to the pbuffers in use (that really surprised me) :slight_smile:

But somehow the results became quite unpredictable. I figured out that the get-the-average-luminance code produced false luminances (preferring some regions of the screen, ignoring others) - which instantly reminded me of the problems in former days. So I started to fiddle with texture-coordinate offsets again, which was quite a desperate effort since it didn't get me anywhere.

I now understand that the luminance-averaging code does not work as expected, since NPOT textures just can't be downsampled using a simple box filter (the sizes can't be repeatedly divided by 2).
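To illustrate the size problem (a hypothetical sketch in Python, not the renderer code): a power-of-two dimension halves cleanly all the way down to a single texel, while an NPOT dimension gets stuck at the first odd value.

```python
# Illustrative sketch: repeated exact halving works for POT sizes
# but stalls for NPOT sizes as soon as an odd dimension appears.

def halving_chain(size):
    """Return the sequence of sizes produced by exact halving."""
    sizes = [size]
    while size > 1 and size % 2 == 0:
        size //= 2
        sizes.append(size)
    return sizes

print(halving_chain(512))  # 512, 256, ..., 2, 1 -- reaches one texel
print(halving_chain(640))  # 640, 320, 160, 80, 40, 20, 10, 5 -- stuck at 5
```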

So, my question is: how can I safely downsample the NPOT luminance texture without counting any pixel twice or leaving any pixel out?

Unless I’m misunderstanding either your question or the paper, this is what you want: Clicky. It’s a direct link to a PDF from NVIDIA on mipmap generation for NPOT images.

How did you do it when you were using POT textures? My guess is that the rest of your texture (other than the rendered portion e.g. 640x480) was zeroed out i.e. black? If that’s the case then you will at most need to add just one border row or column to get your old box filter code working again.

Bob: thanks for pointing to that paper. I remember it, but I didn't think it could help me. I'll read it. The effect I try to achieve is similar to mipmap generation, but also a bit different, I guess.
This means, in my case the intermediate results (i.e. the mip levels) may be wrong from an image-reconstruction point of view. I'm happy with any method as long as I get the average pixel value as a result :slight_smile: That's why I thought there might be an easier way than mipmap generation.

Zulfiqar Malik: To my shame I wasn't handling this case. I just used the full texture (and got some stretched image this way).

Update: I have solved the problem.

Update2: no, I didn't.

Using the method I described, I sampled the correct points, but in the end the math does not work out:

Imagine a 3x1 texture containing the texels A|B|C.
Calculating the average here should give (A+B+C)/3, but the method I described eventually leads to (A/2) + (B+C)/4.
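The error can be reproduced on the CPU (a hypothetical sketch of the passes, not the actual GPU code): in each 2:1 pass, an unpaired texel keeps its full weight, so after the final pass the texels no longer contribute equally. Carrying a per-texel count (in a GPU pass this would mean a coverage weight in a spare channel, or area-weighted filter taps as in the NVIDIA paper mentioned above) restores the true mean:

```python
# Sketch of chained 2:1 averaging on a 3x1 texture, plus a weighted
# variant that tracks how many source texels each value represents.
# Names and pairing order are assumptions for illustration.

def naive_pass(texels):
    """One 2:1 reduction; with an odd count the first texel passes through."""
    out = []
    i = 0
    if len(texels) % 2 == 1:
        out.append(texels[0])   # unpaired texel keeps full weight -- the bug
        i = 1
    while i < len(texels):
        out.append((texels[i] + texels[i + 1]) / 2)
        i += 2
    return out

def naive_average(texels):
    while len(texels) > 1:
        texels = naive_pass(texels)
    return texels[0]

def weighted_average(texels):
    """Carry (sum, count) per texel so every source texel counts exactly once."""
    t = [(x, 1) for x in texels]
    while len(t) > 1:
        out = []
        i = 0
        if len(t) % 2 == 1:
            out.append(t[0])
            i = 1
        while i < len(t):
            out.append((t[i][0] + t[i + 1][0], t[i][1] + t[i + 1][1]))
            i += 2
        t = out
    total, count = t[0]
    return total / count

A, B, C = 3.0, 6.0, 9.0
print(naive_average([A, B, C]))     # 5.25 == A/2 + (B+C)/4, not the mean
print(weighted_average([A, B, C]))  # 6.0  == (A+B+C)/3
```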

Back to the scratchpad. Sorry for my monologue… I'm now more interested in a solution than ever :-/