Creating a framebuffer object (FBO)

I have a question about creating an FBO.
Do I always have to create a renderbuffer as well?
I mean, I've created a framebuffer (glGenFramebuffers()) and attached a texture to it. But in a lot of tutorials they also create a renderbuffer (glGenRenderbuffers()) and make additional calls with it.

best regards,

This is a good tutorial in my view…

There are two types of framebuffer-attachable images: texture images and renderbuffer images. If an image of a texture object is attached to a framebuffer, OpenGL performs "render to texture". And if an image of a renderbuffer object is attached to a framebuffer, then OpenGL performs "offscreen rendering".

By the way, a renderbuffer object is a new type of storage object defined in the GL_EXT_framebuffer_object extension. It is used as a rendering destination for a single 2D image during the rendering process.

In addition, the renderbuffer object was introduced specifically for offscreen rendering. It allows rendering a scene directly to a renderbuffer object instead of to a texture object. A renderbuffer is simply a data storage object containing a single image of a renderable internal format. It is used to store OpenGL logical buffers that do not have a corresponding texture format, such as the stencil or depth buffer.
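For the "render to texture" case that quote describes, the minimal setup really is just a framebuffer with a texture image attached. A hedged sketch using the EXT entry points this thread is about (the 512×512 size and variable names are placeholders, not from any of the tutorials mentioned):

```
// Minimal "render to texture" FBO: one color texture, no renderbuffer.
GLuint fbo, colorTex;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex, 0);
// Anything rendered now lands in colorTex instead of the window.
```

No glGenRenderbuffersEXT anywhere; the texture itself is the storage for the color attachment.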

Hmm, thank you. I know this tutorial. Well, I'm a little bit confused. In

they create all three: a renderbuffer, a framebuffer, and a texture. And in the NVIDIA example simple_glow they create a renderbuffer as well, although I guess they don't need it. So I'm confused. Let's assume I want to create a multipass shader, so I have to draw the result of the first pass into a texture. Creating a framebuffer and attaching a texture should be enough, right?


Yep, a framebuffer with a texture attachment is the minimum you usually need. You add a renderbuffer when you want the FBO to include other window-like attributes, such as a depth buffer or perhaps a stencil buffer. If you are rendering to a 2D screen, you won't need a depth buffer, and an FBO with colour attachments is all you'll need. However, as soon as you use FBOs in a 3D scene and enable depth testing, you are going to need a depth buffer. This is where renderbuffers come in: they provide the storage for these FBO attachments.
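To extend a color-texture FBO for a 3D scene with depth testing, the depth storage comes from a renderbuffer. A sketch with the EXT entry points (assumes the FBO is already bound and its color texture is 512×512; the renderbuffer's size must match the color attachment, and all names here are placeholders):

```
// Add a depth renderbuffer so depth testing works inside the FBO.
GLuint depthRb;
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);
glEnable(GL_DEPTH_TEST);  // depth now behaves as in the window framebuffer
```

Note that glBindRenderbufferEXT is only needed while defining the renderbuffer's storage; at render time, binding the framebuffer is enough.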

Hello, thanks for your answer. The last part of your answer leads to my new question. I implemented a multipass shader (glow).
I tried it by glowing a teapot. To avoid redrawing the teapot after the shader passes, I render the teapot into two textures. So my FBO has three textures (color attachments): attachments 0 and 1 are for the ping-pong, and 2 holds the original teapot. At the end I draw a fullscreen quad textured with the original teapot, and then I blend the glowed teapot over it. That works fine so far, but if I have geometry drawn before or after the teapot, I can't see it, because I rendered a fullscreen quad.

For example:

drawCube()
RenderToTexture(true, target textures 0 and 2)
shader pass 1
shader pass 2
drawOriginalTeapot(from target texture 2)
blend on
blend off

So in both cases, whether the cube is drawn before the texture fetch or after the fullscreen quad, I can't see it.
Your hint about the depth buffer makes me think I could save the state of the depth buffer at the time I render the teapot into the texture, and then write the saved depth buffer back when I draw the final result.
I don't know if this works, or even if it is the proper way to do it.
I'm working with NVIDIA Cg shaders.

And can I create an FBO that has a color attachment and a depth attachment at the same time, so that I can call glBindFramebufferEXT and glBindRenderbufferEXT? I think yes. Am I right?
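Yes, one FBO can carry a color attachment and a depth attachment at the same time, and you can verify the combination with the completeness check. A sketch (assumes a color texture `colorTex` and a depth renderbuffer `depthRb` have already been created with matching sizes; the names are placeholders):

```
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // Incomplete: mismatched attachment sizes, unsupported format, etc.
}
```

At draw time you only bind the framebuffer; the renderbuffer binding is needed just once, while its storage is being defined.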

greetings,