Very multipass rendering

Hello all,

I want to combine images (rendered frames) in this way:
suppose I have 2 images; the combined image = image1 / 2 + image2 / 2.
So if you have a completely black and a completely white image, you’ll get a gray image.

Now, I want to do this with a very large number of images (e.g. 256).

PS: a 256-pass technique may seem ridiculous to you, and you might suspect I’m a very stupid noob, but this is not true; I’m attempting some kind of global illumination / radiosity technique. The 256-pass technique will be very slow (maybe 1 frame per second), but if you compare that time (1 sec per frame) with the time a global illumination solver takes to produce the image (minutes to hours), it’s a spectacular improvement.

Normal blending and framebuffer techniques won’t work anymore, I guess.
Suppose you want to render 256 images into an accumulation buffer with 8-bit precision: the bytes of each image are divided by 256 and accumulated, making the resulting image black (because any byte / 256 with 8-bit precision = 0).
I’m not very acquainted with OpenGL accumulation buffers and blending stuff, but a simple program I wrote says my accumulation buffer has 16-bit precision, meaning I could combine 256 images without precision loss. For 1024 images, precision will be lost again.

I’ve made up a technique with temporary buffers.
It works as follows; suppose you want to render 8 images together (these steps can be skipped):
render image 1 into buffer 1
render image 2 into buffer 2
combine buffers 1 and 2 into buffer 1 (average of images 1-2)
render image 3 into buffer 2
render image 4 into buffer 3
combine buffers 2 and 3 into buffer 2 (average of images 3-4)
combine buffers 1 and 2 into buffer 1 (average of images 1-4)
render image 5 into buffer 2
render image 6 into buffer 3
combine buffers 2 and 3 into buffer 2 (average of images 5-6)
render image 7 into buffer 3
render image 8 into buffer 4
combine buffers 3 and 4 into buffer 3 (average of images 7-8)
combine buffers 2 and 3 into buffer 2 (average of images 5-8)
combine buffers 1 and 2 into buffer 1 (average of images 1-8)
buffer 1 = final image

This technique only uses O(log2 n) buffers (with n = the number of frames).

So if I wanted to combine 1024 images, 10-11 buffers would suffice, and the technique works even with e.g. 8-bit precision buffers.

To actually combine 2 images, I could copy each buffer to a texture, draw it on a 2D quad the size of my viewport, and use register combiners or blending, for example.

For the buffers I could use pbuffers (I work with a GeForce3).

What should I use? I mean, is it possible to do this in OpenGL with regular blending and accumulation buffers and such, or should I use my method?


You want an accumulation buffer, which should have 16 bits per channel of precision. Unfortunately, accumulation buffers are not currently hardware accelerated.

However, I’m not so sure that what you want is the average of all images. Light is usually additive: if you get light from two sources, both sources contribute to the final light of that pixel in the image. Anyway, even if you do it additively, you’ll still have dynamic-range problems with an 8-bit framebuffer.

Better wait for next year’s hardware :slight_smile:

Are you sure you do not keep sufficient accuracy when using just the accumulation buffer and blending like this:

1st image: src = 1, dst = 0
2nd image: src = .5, dst = .5
3rd image: src = .33333, dst = .66667

nth image: src = 1/n, dst = 1 - (1/n)

This gives each image the same weight in the end: for example, after three passes the first image has weight 1 * 1/2 * 2/3 = 1/3 (SRC * DST * DST).

(based on discussions in the Red Book about blending multiple images for equal amounts).



Well, the average is in fact just a sum of all images, but a weighted sum (where each frame is weighted by 1/n). So no difference there; the light is still additive. You could see the weighting as a method to handle the large dynamic range.

JML, I was thinking of that too.
Right now I do it like this:
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
set alpha everywhere to 1/n, and start with a black buffer.

Won’t the result be the same with the two methods? Maybe the one you proposed is less sensitive to limited-precision errors.

I think your comment about storing to a texture is a good idea. This is just a suggestion, but you could make your own accumulation buffer with whichever accuracy you choose: 8-bit, 16-bit, 32-bit, 128-bit. Each frame, render to a pixel buffer or texture, grab the data, add it into your accumulation buffer, and at the end of everything normalize the buffer and you are done. This would be a very simple solution for you to at least play around with.

But again, just a comment.

Neil Witcomb