Multisample render buffer for post processing in a WebGL2 context

First of all, apologies if this forum is not the appropriate place to discuss WebGL and WebGL2.

I’m using a multisample render buffer for post processing using three.js. I use a multisample renderbuffer for both read and write buffers in the post chain, however, some passes create additional buffers and these are standard, non-multisampled buffers, as all the passes were written before WebGL2 brought multisample buffer support to the web.

I am finding that I get good results here, but only if I use a high number of samples (8 or greater). Using 4 samples looks terrible.

My question is, do I need to use multisample buffers everywhere? And is there some final step required to convert the multisample buffer to a normal buffer for drawing to the scene, or is that handled automatically?

After rendering to a multisample FBO, you downsample it to single sample with the equivalent of the OpenGL/GL-ES call glBlitFramebuffer().

According to WebGL Framebuffer Multisampling, that’s gl.blitFramebuffer().
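For reference, the resolve step in raw WebGL2 looks roughly like this. This is only a sketch: `gl` is assumed to be a `WebGL2RenderingContext`, and the function and variable names (`createMultisampleFBO`, `resolveMSAA`, the FBO handles) are hypothetical, not part of any library.

```javascript
// Sketch: resolving a multisampled framebuffer to a single-sample one in WebGL2.
// `gl` is a WebGL2RenderingContext; all other names here are placeholders.

function createMultisampleFBO(gl, width, height, samples) {
  // In WebGL2, a renderbuffer is the only multisampled color attachment
  // available (there are no multisampled textures).
  const rb = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
  gl.renderbufferStorageMultisample(gl.RENDERBUFFER, samples, gl.RGBA8, width, height);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, rb);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return fbo;
}

function resolveMSAA(gl, msaaFBO, resolveFBO, width, height) {
  // Downsample (resolve) the multisampled FBO into an ordinary FBO,
  // whose color attachment can be a texture readable by later passes.
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO);
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO);
  gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                     gl.COLOR_BUFFER_BIT, gl.NEAREST);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
}
```

Note that when the read framebuffer is multisampled, the source and destination rectangles must match, and the filter is effectively ignored for the resolve.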


After rendering to a multisample FBO, you downsample it to single sample with the equivalent of the OpenGL/GL-ES call glBlitFramebuffer

Thanks, that makes sense. Actually the three.js WebGLRenderer is already doing this (I tried to link to the relevant code but I’m not allowed to post links).

I’ll have to work my way through the renderer code and figure out at what point that is happening.

When it comes to multiple post-processing passes, which of these is correct?

  1. Render initially to a multisample FBO, downsample, then perform the rest of the post-processing passes using the downsampled data.
  2. Use multisample FBOs everywhere in the post chain, then downsample at the end.
  3. Use a multisample FBO as a post-processing AA pass. This pass would take the place of a manual FXAA pass.

There isn’t a correct answer here. You do whatever your post-processing passes need. That said, if your post-processing passes are fairly expensive (e.g. fill-wise) and don’t need an MSAA input, it probably makes sense to go ahead and downsample before applying the post-processing passes. This is because 1X framebuffers typically consume less memory bandwidth than MSAA framebuffers, at least for pixels that have been split by a geometry edge.
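A sketch of what option 1 looks like under those assumptions. Everything here is hypothetical scaffolding (`renderScene`, `runPass`, and the `targets` object are stand-ins for your own pipeline); only the `gl` calls are real WebGL2 API.

```javascript
// Sketch of option 1: render with MSAA, resolve once, then run 1X passes.
// `gl` is a WebGL2RenderingContext; every other name is a placeholder.
function renderFrame(gl, scene, passes, targets, renderScene, runPass) {
  // Geometry pass into the multisampled FBO.
  renderScene(gl, scene, targets.msaaFBO);

  // Resolve to a single-sample FBO *before* post processing, so the
  // (potentially fill-heavy) passes read and write 1X buffers only.
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, targets.msaaFBO);
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, targets.readFBO);
  gl.blitFramebuffer(0, 0, targets.width, targets.height,
                     0, 0, targets.width, targets.height,
                     gl.COLOR_BUFFER_BIT, gl.NEAREST);

  // Ping-pong the remaining passes between two ordinary
  // (non-multisampled) FBOs.
  let read = targets.readFBO;
  let write = targets.writeFBO;
  for (const pass of passes) {
    runPass(gl, pass, read, write);
    [read, write] = [write, read];
  }
}
```

The key design point is that the resolve happens exactly once, at the front of the chain, and every subsequent pass only ever touches 1X buffers.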


if your post processing passes are fairly expensive

We’re talking about WebGL(2) here; it’s safe to assume that everything is expensive.

Thanks for the response, that’s good to know.

Just to clarify, are you saying that it doesn’t matter at what point in the chain AA is applied? Currently, all the three.js post-processing examples, if they use AA at all, apply it as an FXAA/SMAA/TAA etc. pass somewhere towards the end of the chain. Are we going to get different/incorrect results if we take the same chain of passes and do AA at the start instead (ignoring the obvious differences between FXAA and MSAA)?

Well, I was talking about the general case (with GL), but it sounds like your question is pretty closely dependent on three.js post-processing and WebGL2. So hopefully someone else more familiar with them can chime in here and give you an answer.

Yeah, fair enough. However, I do want to know about the general case. The reason I’m asking about this is because the three.js post-processing setup was written before multisample render targets were available.

Now we’re trying to work out if we need to change anything to use multisample render targets, or whether the current approach, where we have basically shoehorned them into the start of the chain, is valid.