Multisampled Depth Renderbuffer

2.5 is GPL2 or later. 2.4 is under a “Modified BSD License”.
http://www.antigrain.com/license/index.html

In exactly which post did I start complaining?

And “Then use D3D” is not the answer I want to hear on an OpenGL forum when I'm asking for help.
If that's all you have to offer, please don't answer at all; it just wastes your time, mine, and everyone else's.

I only quoted a D3D reference to show you that there are techniques readily available for doing deferred shading with multisampling.

I said the average non-linear depth may be useful for SOME applications, not ALL.
Sure, for lighting it will be ugly.
But for computing depth-based fog or scattering it may be useful, because non-linear depth shares some properties with a fog blend factor based on distance to the eye: its values concentrate near 1 at far distances, and it is monotonic in that distance (lighting is not). So taking the average of the non-linear depth and then mapping it to a fog blend factor may be an acceptable approximation to the real desired value, which would normally be computed by transforming each sample individually and then averaging the results.

By the way, you can draw antialiased lines without multisampling: just draw 2px-wide lines with a shader and alpha blending, where the shader derives gl_FragColor.a from the fragment's distance to the line. It looks fine, and even better when the lines are 4px wide.
But this completely skips writing to the depth-buffer, so it might not be helpful to you.

I’m sorry, but I also asked for a multisampled depth buffer. Are you trying to reinforce my point? My point in posting is that this discrepancy in how the buffers are treated seems arbitrary. Certainly, averaging depth (which is linear for orthographic projections) isn’t always what is wanted. But who’s to say that such functionality is never useful? The GL_EXT_framebuffer_multisample extension makes no mention that depth attachments are treated differently.

You’re making an aesthetic argument here for a narrow (though primary) application of the API. OpenGL is not just used for rendering realistic 3-D scenes, so your assumption that averaging color is correct while averaging depth is incorrect may be appropriate for your applications but it is not some gold standard.

Multisampling is effectively an image-space operation, so trying to assign some higher-level semantic meaning is not always appropriate. Essentially, we are performing a weighted blending of a buffer’s values. I can filter the depth buffer on the CPU pretty easily, but I was hoping to antialias on the GPU.

In response to your statement that you cannot see any use for a multisampled depth buffer, it seems like this would be exactly what you’d want to slightly emboss primitives into the background.

  • Chris

Ah, then why not convert depth to color (gl_FragData[1].x=…), and do MRT? You’ll get the results you want, I think.

Good idea. I’ll try it out. Thanks.

  • Chris

[quote=“kaerimasu”]

Good idea. I’ll try it out. Thanks.

  • Chris [/quote]

Keep in mind that if you have a lot of overdraw, you might still want to keep a “regular” depth buffer. This way you can benefit from the early-Z culling. This may or may not make a difference depending on your pixel shader complexity, platform, etc.