Over the last month I’ve been playing around a bit with framebuffer objects. I think they’re very nice, but I can’t seem to find a way to render a depth attachment.
By “rendering the attachment” I do not mean rendering it to a quad; that’s easy. What I do mean is that I would like to do the same as for the color buffer, but for depth only. So instead of copying the color attachment to the main framebuffer by using a textured quad, I would very much like to copy the depth attachment of an FBO to the main depth buffer.
I figured this should be possible with glDrawPixels, but I’d rather find another way, because that would be far too slow (I can’t use a PBO to accelerate it).
Thanks again! It seems I’m out of luck, there are no recent drivers for this card. Too bad.
Anyway, I took a fresh dive into the shader stuff and it isn’t really scary anymore. I do wonder why I can’t use gl_MultiTexCoord0 when glTexCoord is used for submitting texture coords, but I guess I should be using multitexturing anyway.
(I’m not even going to ask; it’s pretty obvious that the color and depth attachments from the FBO should be bound to two texture units, so there’s no need for it.)
Thanks, I’ll bookmark that link, my next videocard (or laptop with vidcard) is going to be nVidia again anyway
Yeah, I’m passing gl_MultiTexCoord0 along to gl_TexCoord, which in turn is used as an argument (as a vec2) for texture2D in the fragment program, with the result assigned to gl_FragColor.
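For what it’s worth, that vertex/fragment pair boils down to something like this (just a sketch; the sampler uniform name `tex` is my own choice):

```glsl
// --- vertex shader ---
void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;  // pass texcoords through to the fragment stage
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// --- fragment shader ---
uniform sampler2D tex;  // the FBO's color attachment, bound to unit 0

void main()
{
    gl_FragColor = texture2D(tex, gl_TexCoord[0].st);
}
```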
Heh, I was lucky. At first I was using QGLFramebufferObject from Qt 4.4, but from looking at the Qt source I found that QGLFramebufferObject adds a renderbuffer when you request a depth buffer, while some examples I saw on the web attach textures instead. It wasn’t hard to figure out the difference once I noticed that.
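In case anyone else runs into this: attaching a depth *texture* (so it can be sampled later) instead of the renderbuffer Qt creates looks roughly like this, using the EXT_framebuffer_object entry points. A sketch only; `width` and `height` are assumed to be your FBO dimensions:

```c
/* Create a depth texture and attach it to an FBO, so the depth
 * data can be sampled in a shader afterwards. */
GLuint fbo, depthTex;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height,
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);
```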
I’m currently reading the specs as well. Long read, but I’m afraid that’s the only way to fully learn this stuff.
(and all that just to get buffered regions for my 3d views… pfew (because I can’t use buffer region extensions or PBO-accelerated drawpixels) )
By the way, perhaps I should read the specs better, but there’s one more thing I can’t really figure out yet: why is there a need for the separation between renderbuffers and textures? Why doesn’t OpenGL just use textures only? I mean, FBOs aren’t even accessible from fragment programs, and the image data has to be stored somewhere anyway, right?
(you may slap me around a bit with a large trout when I’ve missed something and someone has found a way to save textures without actually writing out the data to some place and doesn’t depend on dynamic generation of that data)
A renderbuffer is useful for creating a buffer of a type that has no corresponding texture format, such as a stencil buffer (e.g. STENCIL_INDEX8_EXT).
It’s also useful for creating multisample buffers with glRenderbufferStorageMultisampleEXT.
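A minimal sketch of such a multisample setup (assuming EXT_framebuffer_multisample is available and `width`/`height` are your buffer dimensions; the 4-sample count is just an example):

```c
/* Create a 4x multisampled color renderbuffer and attach it to
 * the currently bound FBO. */
GLuint rb;

glGenRenderbuffersEXT(1, &rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 4,
                                    GL_RGBA8, width, height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, rb);
```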
Renderbuffers should always be used (unless you need the output later as a texture, in which case you have no option but to attach a texture to the FBO).
A/ they can be faster and may use less memory
B/ they also support AA (textures don’t)
As for my problem, I’ve finally found a way to fix most issues with my card and driver: DH Mobility Modder. That allowed me to use the most recent (desktop) drivers.
I can now use PBOs, VBOs (finally!) and rectangle textures. The framebuffer blit extension is not listed in the extension string, but the entry point for glBlitFramebufferEXT does seem to exist. It doesn’t work, however; I don’t know what’s up with that.
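For reference, the blit I was attempting is roughly the following (a sketch, assuming the FBO is complete and the same size as the window; `fbo`, `width` and `height` are placeholders):

```c
/* Copy the FBO's depth attachment into the window's depth buffer
 * using EXT_framebuffer_blit. */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);  /* 0 = window framebuffer */
glBlitFramebufferEXT(0, 0, width, height,
                     0, 0, width, height,
                     GL_DEPTH_BUFFER_BIT, GL_NEAREST);
```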
Setting gl_FragDepth does work in RenderMonkey now (and in addition, shaders are now much, much faster), but it does not work in my program yet. I’m probably doing something wrong, but I can’t figure out what it is.
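What I’m trying to do boils down to something like this fragment program (a sketch; `colorTex` and `depthTex` are my names for the FBO’s color and depth attachments, bound to two texture units as discussed above):

```glsl
uniform sampler2D colorTex;  // FBO color attachment
uniform sampler2D depthTex;  // FBO depth attachment (a depth texture)

void main()
{
    gl_FragColor = texture2D(colorTex, gl_TexCoord[0].st);
    // Write the stored depth value back into the main depth buffer.
    gl_FragDepth = texture2D(depthTex, gl_TexCoord[0].st).r;
}
```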
Anyway, I think I’ll try the PBO-accelerated glDrawPixels approach for reading/writing the depth buffer from the FBO (then I can also switch back to renderbuffers, right?). It should work now, I hope.
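The plan, sketched out (assuming ARB_pixel_buffer_object; `fbo`, `width` and `height` are placeholders, and depth is read as 32-bit unsigned ints so the copy stays on the GPU):

```c
/* Pack the FBO's depth buffer into a PBO, then unpack it into the
 * window's depth buffer with glDrawPixels. */
GLuint pbo;

glGenBuffersARB(1, &pbo);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB,
                width * height * sizeof(GLuint), NULL, GL_STREAM_COPY_ARB);

/* Read depth from the FBO into the PBO (offset 0). */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

/* Draw the PBO contents into the main depth buffer. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
glRasterPos2i(0, 0);
glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
```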