I must be missing something obvious here. Perhaps you guys can clarify for me.

I am converting some code from using PBuffers to using FBOs for my off-screen rendering.

I have googled several example projects on the net and studied them and I have become very confused as to the purpose of render buffers.

Some FBO examples I have seen don’t use them. They just bind the texture they wish to draw onto to the FBO and get crackin’.

Others attach the texture to a renderbuffer, then attach that to the FBO.

Yet others do it one way for the color buffer, but the other way for the depth buffer.

So, my questions are: What purpose does the render buffer serve? Why would people choose to use it or not use it?

A renderbuffer is a rendering target that is not a texture. That’s all. It is just an extra point of flexibility in the spec so that implementations have more freedom; it doesn’t make much practical difference on current hardware. Ideally, you should use renderbuffers for any attachment you don’t need as a texture :)

So… A renderbuffer is kind of a place-holder?

For example: In my implementation I have no use for depth-buffering. (my data is pre-sorted for depth, so it draws correctly as it is)

I disable depth-buffering at the beginning of the rendering pass: glDisable(GL_DEPTH_TEST);

I only care about the color values resulting from the off-screen rendering.

So, when I use an FBO, I should just attach a texture to the FBO for color and attach a renderbuffer for depth?

The typical use of a renderbuffer object is offscreen rendering. This buffer can also hold image data with no corresponding texture format, such as stencil or accumulation buffers.

I believe you can store depth info in a renderbuffer object using an internal format token such as GL_DEPTH_COMPONENT24.

If you don’t need depth information you don’t have to attach a depth texture or depth render buffer.

To an FBO you can either attach a texture or a renderbuffer. Normally you attach a texture if the result of rendering is going to be used later. Otherwise you would only attach a renderbuffer.
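As a sketch of the common split described above (a color texture you can sample later, plus a depth renderbuffer you never read), using the core FBO entry points; the names and sizes here are illustrative, and a current GL context is assumed:

```c
/* Sketch: color attachment as a texture (result will be sampled later),
   depth attachment as a renderbuffer (only used for depth testing).
   Assumes a current GL 3.0+ context; names/sizes are illustrative. */
GLuint fbo, colorTex, depthRb;
const GLsizei w = 512, h = 512;

/* Color buffer as a texture, so the rendered image can be used later. */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* Depth buffer as a renderbuffer: depth testing works, but the
   contents are never sampled as a texture. */
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

/* Tie both attachments to the FBO. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle an incomplete framebuffer */
}
```

If you truly never depth-test (as in the pre-sorted case earlier in the thread), you can drop the renderbuffer lines entirely and attach only the color texture.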

Some might think there is no difference between using textures and renderbuffers, but there is. At least on my old ATI X800 XT, depth textures could only have 16 bits while a depth renderbuffer could have 24 bits. So, if you attach a depth texture you get less precision compared to a normal depth renderbuffer.


What purpose does the render buffer serve? Why would people choose to use it or not use it?

OK, let’s say you’re rendering the reflection of a scene for a mirror in the main scene. So, you need to use the color information that you render to in your main scene. Therefore, it needs to be a texture (because you can’t use renderbuffers as textures).

In order for the mirrored rendering to work however, you need a depth buffer. This allows your rendering to use depth tests, etc. But you don’t need to use that information as anything but a spare, functioning depth buffer. That can, and should, be a renderbuffer.
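A rough per-frame sketch of that mirror pass, assuming a framebuffer `mirrorFbo` with a color texture `mirrorTex` attached and a depth renderbuffer backing the depth test (all names here are illustrative, not from the thread):

```c
/* Render the reflection into the offscreen target. */
glBindFramebuffer(GL_FRAMEBUFFER, mirrorFbo);
glViewport(0, 0, mirrorW, mirrorH);
glEnable(GL_DEPTH_TEST);            /* backed by the depth renderbuffer */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* ... draw the reflected scene ... */

/* Back to the window; the depth renderbuffer is never touched again,
   but the color result is sampled as an ordinary texture. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, mirrorTex);
/* ... draw the mirror quad in the main scene using mirrorTex ... */
```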

Attaching a texture to an FBO signals your intent to read from that image later: to use it as a texture, which is why you bound it that way to begin with. If you do not need to use the image as a texture, then use a renderbuffer.

Thanks guys! It makes more sense now.