Any reason left to use renderbuffers that textures do not cover?

Well, the question is simple: is there anything I can do only with renderbuffers and not with textures that I'm not aware of?

Renderbuffers do not even support layered rendering, so it looks like they were already half-dropped along the way.

I'd like to make sure that I did not miss anything before throwing them out of my library.

Renderbuffers are useful to the extent that an implementation might use different memory storage or layouts if it knows up-front that you cannot read from them as textures, or that you will definitely use them as render targets.
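
To make the API-level distinction concrete, here's a minimal sketch of the two ways to back a depth attachment (sizes and formats are just placeholders); the renderbuffer path is the one that declares the "never sampled" intent up front:

```c
GLuint fbo, rb, tex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Option A: renderbuffer - write-only from the application's point of view. */
glGenRenderbuffers(1, &rb);
glBindRenderbuffer(GL_RENDERBUFFER, rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1920, 1080);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, rb);

/* Option B: texture - same attachment point (attaching it would replace
 * Option A; shown only for comparison), but it can later be sampled. */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT24, 1920, 1080);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, tex, 0);
```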

For example, if we use Vulkan as a guide, NVIDIA seems to have special memory allocation schemes for images that are meant to be framebuffer attachments, as opposed to other images. It can tell which is which from your usage parameters, so you have the ability to pick which memory type to use based on what the implementation tells you are the valid memory types.

How truly important any of that is is questionable, since OpenGL implementations have always had to deal with incomplete information, so they're already used to shuffling images around in memory based on how you use them. My point is that if there is some benefit (beyond making it clear that a particular image won't be read from), that's the kind of benefit it would be.
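
To illustrate the Vulkan side of that point, here's a rough sketch (not NVIDIA-specific; the format, size, and property flags are just placeholders) of how the usage you declare feeds into which memory types the implementation offers you:

```c
#include <vulkan/vulkan.h>

/* Sketch: create an image whose declared usage says "color attachment only",
 * then ask which memory types are legal for it. The usage flags are what let
 * the implementation steer render targets toward special memory. */
static uint32_t pick_memory_type(VkPhysicalDevice phys,
                                 uint32_t allowedTypeBits,
                                 VkMemoryPropertyFlags wanted)
{
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(phys, &props);
    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        if ((allowedTypeBits & (1u << i)) &&
            (props.memoryTypes[i].propertyFlags & wanted) == wanted)
            return i;
    }
    return UINT32_MAX; /* no suitable memory type */
}

static void create_render_target(VkPhysicalDevice phys, VkDevice dev)
{
    VkImageCreateInfo info = {
        .sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .imageType     = VK_IMAGE_TYPE_2D,
        .format        = VK_FORMAT_R8G8B8A8_UNORM,
        .extent        = { 1920, 1080, 1 },
        .mipLevels     = 1,
        .arrayLayers   = 1,
        .samples       = VK_SAMPLE_COUNT_1_BIT,
        .tiling        = VK_IMAGE_TILING_OPTIMAL,
        .usage         = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT, /* never sampled */
        .sharingMode   = VK_SHARING_MODE_EXCLUSIVE,
        .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
    };
    VkImage image;
    vkCreateImage(dev, &info, NULL, &image);

    VkMemoryRequirements reqs;
    vkGetImageMemoryRequirements(dev, image, &reqs);

    /* memoryTypeBits reflects the usage declared above; a sampled image may
     * be offered a different set of legal types on some implementations. */
    uint32_t typeIndex = pick_memory_type(phys, reqs.memoryTypeBits,
                                          VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);

    VkMemoryAllocateInfo alloc = {
        .sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .allocationSize  = reqs.size,
        .memoryTypeIndex = typeIndex,
    };
    VkDeviceMemory mem;
    vkAllocateMemory(dev, &alloc, NULL, &mem);
    vkBindImageMemory(dev, image, mem, 0);
}
```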

Good question. I’m also interested in this.

I guess the core question is whether any GL/GLES drivers offer specific advantages (performance, available formats, etc.) with use of renderbuffers vs. textures.

On the available-formats piece of this, the OpenGL 4.6 spec (Table 8.19 and Table 8.21) doesn't indicate any required renderable internal formats which are available via renderbuffers but not textures. So it comes down to extension-defined behavior and formats. One extension oddity even provides vendor-specific support for sampling from renderbuffers in the shader, suggesting that on some drivers at least the underlying representation may not be that different, if different at all.

So on that thread, you might check all the internal formats/properties you care about with glGetInternalformat to ensure that all are supported for both renderbuffers and textures on the GPUs/drivers you support.

As for the performance side of the question, I'm not aware of any drivers advertising/demonstrating better perf with renderbuffers over textures in certain use cases. But that doesn't mean such cases don't exist. I guess use of renderbuffers over textures does have the advantage of advertising to the driver that you don't plan to read from the rasterized content directly in a shader (…well, except on drivers that allow you to do exactly that).

https://www.khronos.org/opengl/wiki/GLAPI/glGetInternalformat
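
For instance, something along these lines (GL 4.3+ / ARB_internalformat_query2; GL_RGBA8 is just a stand-in for whatever formats you actually care about):

```c
/* Ask the driver how well a given internal format is supported as a
 * framebuffer-renderable image for a renderbuffer vs. a 2D texture.
 * Each query returns GL_FULL_SUPPORT, GL_CAVEAT_SUPPORT, or GL_NONE. */
GLint rb_support = GL_NONE, tex_support = GL_NONE;
glGetInternalformativ(GL_RENDERBUFFER, GL_RGBA8,
                      GL_FRAMEBUFFER_RENDERABLE, 1, &rb_support);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8,
                      GL_FRAMEBUFFER_RENDERABLE, 1, &tex_support);
```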

A few more tidbits related to this question…


Nowadays, you can probably get the same effect with either renderbuffers or textures just by calling glClear()/glClearBuffer() at the beginning and glInvalidateFramebuffer()/glDiscardFramebufferEXT() at the end (to prevent the initial read-in and the final flush/write-out). Then it's just memory consumption for the dummy render target that's a factor, and whether either renderbuffers or textures will skip the underlying storage allocation on that GLES driver.
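
A sketch of that pattern (assuming an FBO named fbo with color and depth attachments; glInvalidateFramebuffer is core in GLES 3.0 / GL 4.3):

```c
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Start of frame: a full clear tells a tiler it needn't load the old
 * attachment contents back in from memory. */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

/* ... render the frame ... */

/* End of frame: invalidate whatever you won't read back, so the driver can
 * skip the final write-out to memory. */
const GLenum discard[] = { GL_DEPTH_ATTACHMENT };
glInvalidateFramebuffer(GL_FRAMEBUFFER, 1, discard);
```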


PowerVR Supported Extensions - OpenGL ES and EGL : GL_OES_framebuffer_object:

More indication that renderbuffer depth/stencil attachments may avoid the final write-out on some mobile GLES drivers. Again, consider use of glInvalidateFramebuffer() / glDiscardFramebufferEXT() to make this explicit so the driver doesn't have to guess. Then you should be able to get this behavior for texture render targets too.
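
A hedged sketch of making that explicit, regardless of whether the depth/stencil attachment is a renderbuffer or a texture (has_gles3 and has_discard_ext are assumed capability flags you would set up yourself):

```c
/* After the last draw that needs depth/stencil, tell the driver their
 * contents can be thrown away instead of being resolved to memory. */
const GLenum ds[] = { GL_DEPTH_ATTACHMENT, GL_STENCIL_ATTACHMENT };

if (has_gles3) {
    /* Core in OpenGL ES 3.0 (and desktop GL 4.3). */
    glInvalidateFramebuffer(GL_FRAMEBUFFER, 2, ds);
} else if (has_discard_ext) {
    /* EXT_discard_framebuffer fallback for OpenGL ES 2.0. */
    glDiscardFramebufferEXT(GL_FRAMEBUFFER, 2, ds);
}
```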


ARB_copy_image:

No fundamental difference between the two, per the extension text. It's unclear whether this is meant from an interface and/or an implementation perspective.
