Using early depth pass with MSAA

I have implemented a depth pre-pass in my engine, which works well, except when I enable MSAA.

What I need:

  • fill the default depth buffer with multisampled pre-pass depths
  • have the (resolved) depth pre-pass data in a texture to use it for SSAO.

What I have tried:

  • depth pre-pass into a multisampled FBO with a texture attachment, then glBlitFramebuffer to the default depth buffer: the final image has jagged edges
  • depth pre-pass into the default depth buffer directly: works perfectly, but then I can’t manage to blit the depth map from the default depth buffer into another FBO with a texture attachment for later use (sketched below).
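
For the second approach, the copy I can’t get working is roughly this kind of blit (just a sketch; depthCopyFBO is an illustrative name for an FBO with a depth texture attachment):

  // Copy the default depth buffer into an FBO's depth texture for later use.
  GLCall(glBindFramebuffer(GL_READ_FRAMEBUFFER, 0));             // default framebuffer as source
  GLCall(glBindFramebuffer(GL_DRAW_FRAMEBUFFER, depthCopyFBO));  // FBO with a depth texture attachment
  GLCall(glBlitFramebuffer(0, 0, SCR_WIDTH, SCR_HEIGHT, 0, 0, SCR_WIDTH, SCR_HEIGHT,
               GL_DEPTH_BUFFER_BIT, GL_NEAREST));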

I have verified that the multisampled FBO really does contain multisampled data:

  • if I blit it with glBlitFramebuffer for visualization, it does not appear antialiased;
  • but if I resolve it with a shader (averaging the sample values), it appears antialiased (roughly the kind of fullscreen pass sketched below).
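
For reference, the shader resolve is essentially a fullscreen pass along these lines (a sketch, not my exact code; names are illustrative):

  // Fullscreen pass that averages the samples of the multisampled depth texture.
  const char* resolveFragSrc = R"GLSL(
      #version 330 core
      uniform sampler2DMS depthTexMS;   // the multisampled depth texture
      uniform int sampleCount;          // ANTI_ALIASING_SAMPLES
      out vec4 fragColor;
      void main()
      {
          ivec2 texel = ivec2(gl_FragCoord.xy);
          float depth = 0.0;
          for (int i = 0; i < sampleCount; ++i)
              depth += texelFetch(depthTexMS, texel, i).r;
          fragColor = vec4(vec3(depth / float(sampleCount)), 1.0);
      }
  )GLSL";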

So, the problem seems to lie in the copy stage:

  • when I use glBlitFramebuffer to copy the multisampled FBO into a normal one for visualization, it does not resolve the samples
  • when I use glBlitFramebuffer to copy the multisampled FBO to the default depth buffer, it does not copy the multisampled data successfully, whereas the result is perfect if I avoid the copy by rendering the depth pre-pass directly into the default depth buffer.

Here is the code for the first approach (multisampled FBO with a texture attachment, then glBlitFramebuffer).

Depth pre-pass framebuffer creation:

  GLCall(glGenFramebuffers(1, &cameraDepthMapFBO));
  GLCall(glGenTextures(1, &cameraDepthMapTexture));
  GLCall(glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, cameraDepthMapTexture));
  GLCall(glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, ANTI_ALIASING_SAMPLES, GL_DEPTH_COMPONENT,
               SCR_WIDTH, SCR_HEIGHT, GL_TRUE)); 

  // Note: multisample textures have no sampler state, so no filter parameters are set here.

  GLCall(glBindFramebuffer(GL_FRAMEBUFFER, cameraDepthMapFBO));

  GLCall(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D_MULTISAMPLE, cameraDepthMapTexture, 0));

  GLCall(glDrawBuffer(GL_NONE));
  GLCall(glReadBuffer(GL_NONE));

  if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        std::cout << "Early depth test Framebuffer not complete!" << std::endl;

  GLCall(glBindFramebuffer(GL_FRAMEBUFFER, 0));

Here I render the depth pre-pass into the FBO:

  GLCall(glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT));
  GLCall(glBindFramebuffer(GL_FRAMEBUFFER, cameraDepthMapFBO));
  GLCall(glClear(GL_DEPTH_BUFFER_BIT));
  // All draw commands
  GLCall(glBindFramebuffer(GL_FRAMEBUFFER, 0));

Here I copy the early-z data into the default depth buffer, and then render the scene:

  GLCall(glClearColor(0.01f, 0.01f, 0.01f, 1.0f));
  GLCall(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT));
  GLCall(glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT));

  // Copy early-z data into default depth buffer
  GLCall(glBindFramebuffer(GL_READ_FRAMEBUFFER, cameraDepthMapFBO));
  GLCall(glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0));
  GLCall(glBlitFramebuffer(0, 0, SCR_WIDTH, SCR_HEIGHT, 0, 0, SCR_WIDTH, SCR_HEIGHT, GL_DEPTH_BUFFER_BIT, GL_NEAREST));

  glDepthFunc(GL_LEQUAL);

  // All draw commands

Here is the result: jagged edges caused by depth map aliasing.

Any ideas would be greatly appreciated 🙂

It’s not clear how multisampling is supposed to work here.

You seem to want to use multisampling in your depth-pre-pass, do your normal rendering with this multisampled depth data, resolve the multisampled depth buffer into a non-multisampled depth buffer, and then perform SSAO on non-multisampled depth data. Is that correct?

If that’s the case… it’s not going to work. Resolving a depth image is… not a good idea, in that the standard doesn’t actually say it will produce reasonable results. If you do a depth resolve, the resulting depth value is implementation-defined, with the only requirement being that it lie within the closed range between the smallest and largest sample depth values for each pixel.

And indeed, it’s not even clear what “reasonable results” would be for such a resolve operation.

If a pixel has multiple depth values in it, should the implementation linearly interpolate to find the “right” one? Depth values are usually in a non-linear space relative to world coordinates, so that doesn’t make sense. Indeed, what would any interpolation mean for the depth? If half of the samples have a depth of 0.25, and half have a depth of 0.75, does 0.5 make any kind of sense? That’s just geometrically incorrect with regard to the scene.

Basically, you should never resolve depth. Do your SSAO in multisample space and resolve your color image.
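
Resolving the color image is just an ordinary blit from the multisample framebuffer to a single-sampled one, something along these lines (illustrative names):

  // Resolve only the COLOR data: multisampled read FBO -> single-sampled draw FBO (or the window).
  glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaSceneFBO);
  glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);   // or 0 for the default framebuffer
  glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                    GL_COLOR_BUFFER_BIT, GL_NEAREST);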

Yes. But the last part (“resolve the multisampled depth buffer into a non-multisampled depth buffer”) is not important. For now, I would just like to use the multisampled depth pre-pass successfully, without getting jagged edges like in the image. I also need to have the depth map as a texture for later use.

To be more precise:

  • why does the image have jagged edges, despite the depth pre-pass being multisampled and copied via glBlitFramebuffer?
  • if I choose to render the depth pre-pass into the default depth buffer directly, is it possible to then blit this default depth buffer into an FBO texture attachment for later use?

I have managed to resolve the problem by rendering entirely to an offscreen FBO (depth pre-pass and main rendering stage), and then blitting the color buffer to the default color buffer.
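
Roughly, the frame now goes like this (just a sketch with illustrative names; the SSAO wiring is omitted):

  // Everything is rendered into an offscreen FBO; the default framebuffer only
  // receives the final color blit.
  GLCall(glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO));
  GLCall(glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT));
  GLCall(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));

  // Depth pre-pass and main pass share the same FBO depth buffer, so no depth copy is needed.
  // ... depth pre-pass draw commands ...
  glDepthFunc(GL_LEQUAL);
  // ... main rendering draw commands ...

  // Only the color buffer is blitted to the default framebuffer.
  GLCall(glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFBO));
  GLCall(glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0));
  GLCall(glBlitFramebuffer(0, 0, SCR_WIDTH, SCR_HEIGHT, 0, 0, SCR_WIDTH, SCR_HEIGHT,
               GL_COLOR_BUFFER_BIT, GL_NEAREST));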

For some reason, blitting to the default depth buffer never worked properly.
I read that, for historical reasons, the default framebuffer is quite limited in OpenGL; this may be part of the explanation.

Anyway, it works perfectly now. I followed your advice, @Alfonse_Reinheart, and used non-multisampled depth data for SSAO. It works great! Thank you for your time.

Questions:

  1. Which GPU, GPU driver, and GPU driver version are you rendering on?
  2. Are you rendering through any emulation layers (e.g. Zink on MoltenVK, ANGLE, mobile GPU SDK, etc.)?
  3. What is your default framebuffer? A window?
  4. What is the sampling rate of your default framebuffer? I’m assuming 1X, not MSAA.

I noticed your Mac window decorations. So let me preface everything with the following: thus far, Apple has shown no desire for OpenGL or Vulkan to work well, if at all, on their platforms. That might have something to do with the results you’re seeing. That said…

Above, you really haven’t presented sufficient evidence to convince the reader that the “jagged edges” in your “COLOR” buffer have anything to do with the number of samples in your “DEPTH” buffer or if/how it’s resolved at some point. Also, you’ve got 2 kinds of “jaggies” here.

The first, most distracting jagged edges are those around the edges of the plant against the sky, with “black halos”. These could just as well have been caused by incorrect draw order, incorrect blending state/function, not using framebuffer sRGB, using a tile-based GPU (which often cheats on MSAA) combined with a mid-frame flush, etc., and could be partly due to the framebuffer clear color you used (assuming you cleared it). My bets are on a bad blending state/function and/or draw order.

Second, there are the blocky artifacts in the shadow-like effect on the ground. But those don’t seem like jagged edges – just blocky artifacts. So which one are you talking about?

Do this test:

  1. Enable MSAA render
  2. Disable all SSAO processing
  3. Render your scene (sky dome, ground/terrain, this plant feature) to the MSAA color+depth FBO
  4. Blit the MSAA FBO’s COLOR buffer to the 1X default framebuffer (window)
  5. What do you see? Post the resulting COLOR buffer result.

If in fact your rendering requires a multisample (MSAA) depth buffer, create one and render to it. If you need to reference it as MSAA in rendering elsewhere, make sure you’ve created that MSAA depth buffer as an MSAA depth texture. Then simply bind that MSAA depth texture as an input to a shader program, and read the values from it in the shader (e.g. via sampler2DMS and texelFetch()). I’ve done this before and it works well. It sounds like you have too, as you mention doing a custom resolve with a shader above.
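
For example, something along these lines (just a sketch; names are illustrative, and it assumes the shader program is currently bound):

  // C++ side: bind the MSAA depth texture to a texture unit and point the sampler at it.
  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msaaDepthTexture);
  glUniform1i(glGetUniformLocation(program, "depthTexMS"), 0);

  // GLSL side:
  //   uniform sampler2DMS depthTexMS;
  //   float depth = texelFetch(depthTexMS, ivec2(gl_FragCoord.xy), sampleIndex).r;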

If you really do need the MSAA depths, you certainly don’t want to resolve (downsample) depths. That fails your requirements immediately. And even if some single-sample depth buffer would work as well, with specific choice of downsampled depth values, you still can’t use glBlitFramebuffer(). @Alfonse_Reinheart has already covered this, but here’s the spec language explaining why:

OpenGL 4.6 Spec:

So is it min depth? Max depth? Average depth? Some random depth value between min and max? Sure! Any and all are valid. Now, is the driver’s choice guaranteed to be what you need? Probably not.

  1. I use a MacBook Pro 15" (2012) with a GeForce GT 650M, under macOS Catalina (10.15.7). I use OpenGL 3.3, core profile.
  2. No, I run directly on macOS.
  3. The default framebuffer is created via glfwCreateWindow.
  4. For this example, I used 2 MSAA samples on the default framebuffer: glfwWindowHint(GLFW_SAMPLES, 2); (window creation sketched below).
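
The window creation looks roughly like this (a sketch, not my exact code):

  glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
  glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
  glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
  glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);   // needed for core profile contexts on macOS
  glfwWindowHint(GLFW_SAMPLES, 2);                        // 2x MSAA default framebuffer
  GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "Engine", nullptr, nullptr);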

The first, most distracting jagged edges are those around the edges of the plant against the sky, with “black halos”. These could just as well have been caused by incorrect draw order, incorrect blending state/function, not using framebuffer sRGB, using a tile-based GPU (which often cheats on MSAA) combined with a mid-frame flush, etc., and could be partly due to the framebuffer clear color you used (assuming you cleared it). My bets are on a bad blending state/function and/or draw order.

You’re right that I didn’t give sufficient information to confirm what I said. I can confirm, however, that the jagged edges around the plants against the sky were due to MSAA, for two reasons: firstly, these jagged edges disappear if I set the MSAA samples to 0 (I think that, in this case, the depth buffer aliasing matches the color buffer aliasing, so it’s fine); secondly, the problem also disappears, no matter the MSAA level, if I don’t apply the early-z pass.

For reference, here is the same image as the one I posted before, but with MSAA samples set to 0:

Second, there are the blocky artifacts in the shadow-like effect on the ground. But those don’t seem like jagged edges – just blocky artifacts. So which one are you talking about?

Those are just due to insufficient shadow resolution / excessive shadow draw distance in this scene.

Do this test:

  1. Enable MSAA render
  2. Disable all SSAO processing
  3. Render your scene (sky dome, ground/terrain, this plant feature) to the MSAA color+depth FBO
  4. Blit the MSAA FBO’s COLOR buffer to the 1X default framebuffer (window)
  5. What do you see? Post the resulting COLOR buffer result.

In this test the problem is solved. What solves it is that I don’t blit the FBO depth buffer to the default framebuffer, but render directly into the FBO; blitting the color buffer at the end causes no problem. I tested with and without MSAA on the default framebuffer, and the result is the same.

I had to split this into two posts to be able to post the second image.

Here is the result of the test you suggested (I left all systems enabled since the problem is resolved, and kept the poor shadow resolution so you can compare with the same parameters):

Thank you for your interesting explanations! You were both very helpful.
