FBO with default (window) depth buffer?

I come from DX9 land, where it was possible to create render targets to draw into while still using the depth buffer that was originally created for the rendering window.

From what I’ve been able to find on the net, it does not sound like this is possible in OpenGL. Is that true?

If for example I wanted to do some deferred rendering and I needed depth buffering, I’d need to create a new depth buffer to attach to the FBO rather than use the depth buffer that’s associated with the window?

Yes, it’s true. But then again, if you only render the post-processing stage to the back buffer, you don’t need a depth buffer associated with the window.
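For what it’s worth, the usual setup looks roughly like this (an untested sketch using the GL 3.0 / ARB_framebuffer_object entry points; the size and the names fbo, colorTex, and depthRb are just for illustration). The FBO gets its own depth renderbuffer, since the window’s depth buffer can’t be attached to it:

GLuint fbo, colorTex, depthRb;
const int width = 1280, height = 720;   /* assumed render target size */

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Color attachment: a texture, so the result can be sampled later */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

/* Depth attachment: a renderbuffer owned by the FBO, not by the window */
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    /* handle incomplete framebuffer */ ;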

That is true.

I was just looking for clarification to make sure I wasn’t being wasteful by allocating a second depth buffer.

Unfortunately, with a deferred setup an intact depth buffer is required so that objects that need to be lit in a forward fashion (partially transparent objects, particles, etc.) still interact correctly with the 3D scene as far as depth is concerned.
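One way to handle that (just a sketch, not gospel) is to put the depth in a texture and attach the very same texture to both FBOs; gbufferFbo and forwardFbo here are hypothetical names, and width/height are assumed:

/* One depth texture, shared by both FBOs */
GLuint sharedDepth;
glGenTextures(1, &sharedDepth);
glBindTexture(GL_TEXTURE_2D, sharedDepth);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* Attach it to the G-buffer FBO for the geometry pass... */
glBindFramebuffer(GL_FRAMEBUFFER, gbufferFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, sharedDepth, 0);

/* ...and to the forward/accumulation FBO, so transparent objects
   depth-test against the opaque scene that was laid down earlier. */
glBindFramebuffer(GL_FRAMEBUFFER, forwardFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, sharedDepth, 0);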

I guess another alternative would be to create a 24 bpp back buffer for the window with no depth buffer, then create the standard 32 bpp “final” offscreen frame buffer in addition to the deferred G-buffers, along with a corresponding depth buffer. That seems like a lot of extra work, though, just to save 8 bits × 2 (24 bpp instead of 32 bpp) on the double-buffered window back buffer.

Here is a PDF explaining how this is done in Killzone 2:

www.guerrilla-games.com/publications/dr_kz2_rsx_dev07.pdf

Thanks for the link (I’ve seen the paper before, it’s definitely a good one). However, my issue is not with trying to understand the concepts behind deferred rendering.

The issue is with depth buffering and how that’s handled specifically in OpenGL. However, it sounds like it’s pretty cut and dried: if depth buffering is required when using FBOs, it will be necessary to allocate an extra depth buffer. Certainly not ideal, but not necessarily the end of the world with 512MB and even 1GB cards becoming standard.

> it will be necessary to allocate an extra depth buffer. Certainly not ideal, but not necessarily the end of the world with 512MB and even 1GB cards becoming standard.

No, it would not.

See, it isn’t an extra depth buffer if it’s the only depth buffer. Nobody said that the main window had to have a depth buffer, after all. Just create a main window pixel format with no depth or stencil bits, and you will have no depth buffer on the main window.

No wasted memory.
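On Windows that just means zeroing out the depth and stencil bits when you set up the pixel format, something along these lines (a WGL sketch; hdc is assumed to be your window's device context, and note that ChoosePixelFormat is free to hand back a "closest match" that still has depth bits, so it's worth checking what you actually got):

/* Needs <windows.h>; hdc is the window's device context */
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize        = sizeof(pfd);
pfd.nVersion     = 1;
pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType   = PFD_TYPE_RGBA;
pfd.cColorBits   = 24;
pfd.cDepthBits   = 0;   /* no window depth buffer */
pfd.cStencilBits = 0;   /* no window stencil buffer */

int format = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, format, &pfd);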

Well, if you look in that PDF at how they set up their buffers, you’ll notice the “light accumulation buffer”: it is used to first accumulate all the lights, shadows, and so on; then you multiply it with the albedo, still in the same buffer.
But that’s not all: after the deferred part is done, you can start rendering all the forward-rendered bits (decals, transparencies, smoke, FX, and so on) into the same buffer.

So as Alfonse said, you won’t need another depth buffer.
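In terms of GL state, the forward part of the frame boils down to something like this (again just a sketch; forwardFbo and drawTransparentObjects are hypothetical names): keep depth testing on against the depth laid down in the geometry pass, but turn depth writes off for the blended stuff.

/* Deferred lighting is done; now the forward bits, into the same FBO.
   The shared depth attachment is still bound to it. */
glBindFramebuffer(GL_FRAMEBUFFER, forwardFbo);   /* hypothetical name */

glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);   /* test against opaque depth, but don't write it */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

drawTransparentObjects();   /* hypothetical: decals, particles, FX, ... */

glDepthMask(GL_TRUE);
glDisable(GL_BLEND);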

Thanks for all the tips, suggestions, and guidance. I think I’ve now got it pretty clearly sorted out in my head how it needs to flow with only a single depth buffer.