Differences in EGL_BUFFER_DESTROYED behavior between vendors

Hi All,

I’ve found that the same OpenGL ES code behaves differently depending on the GPU vendor. The code is the gl-renderer from the open-source Weston compositor (https://github.com/wayland-project/weston/blob/master/libweston/renderer-gl/gl-renderer.c#L884).
When I use ARM’s GPU, there is no trailing image on screen. When I use IMG’s GPU, there is a trailing image on screen. So I reported it to IMG (because ARM’s library doesn’t have the trailing-image issue).

This is the answer from IMG:

But for EGL_BUFFER_DESTROYED, the spec leaves it to each GPU vendor to decide whether the color buffer contents are destroyed or merely changed by the swap-buffer operation. From our previous debugging experience on ARM, their driver chooses the first option (destroying the color buffer contents, which may imply a color buffer clear). IMG implements the second one (the contents are changed by the swap-buffer operation, meaning we simply keep the content in the color buffer, so you will see the previous content).

I accept that the behavior of EGL_BUFFER_DESTROYED differs between vendors, because that is what the spec allows. But I’m still curious. Leaving this as a vendor implementation detail in the spec means the same code can behave differently, and I’m not sure how to prevent that. Even though the rendering code is open source, the problem is difficult to predict and solve. Could someone please help me with this problem? Could anyone recommend further documentation to read?

Thanks.

I’m not sure I understand what you are asking; there seems to be a lot of implied context that I’m missing. So to recap, here’s my understanding: EGL_BUFFER_DESTROYED is one possible mode for EGL_SWAP_BEHAVIOR, which describes what happens to the contents of the color buffer when calling eglSwapBuffers. This can be queried with eglQuerySurface, and the other possible value is EGL_BUFFER_PRESERVED, which guarantees that the color buffer content is preserved.
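
For reference, a minimal sketch of that query, assuming an EGLDisplay and EGLSurface created elsewhere:

```c
#include <EGL/egl.h>
#include <stdio.h>

/* Query how the color buffer is treated across eglSwapBuffers.
 * dpy and surface are assumed to be valid, created elsewhere. */
void print_swap_behavior(EGLDisplay dpy, EGLSurface surface)
{
    EGLint behavior = 0;

    if (!eglQuerySurface(dpy, surface, EGL_SWAP_BEHAVIOR, &behavior)) {
        fprintf(stderr, "eglQuerySurface failed: 0x%x\n",
                (unsigned)eglGetError());
        return;
    }

    printf(behavior == EGL_BUFFER_PRESERVED
           ? "color buffer is preserved across swaps\n"
           : "color buffer contents are undefined after a swap\n");
}
```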

On the other hand, EGL_BUFFER_DESTROYED does not make any guarantees about the buffer contents after the swap (maybe it should have been called EGL_BUFFER_UNDEFINED), so your application is not allowed to make any assumptions about the contents. The implementation is free to leave it alone, overwrite it, or overwrite it only if the battery level is low and it is a Tuesday, except if it is the birthday of the engineer implementing this - you get the idea :wink:
From your description it sounds like the IMG implementation could just as well report EGL_BUFFER_PRESERVED, but even if they do preserve the content they are not required to; preserving the content is valid behavior under EGL_BUFFER_DESTROYED as well.
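
If your application genuinely needs the previous content, the robust route is to ask for it explicitly rather than rely on what a vendor happens to do. A hedged sketch, assuming the surface’s EGLConfig advertises EGL_SWAP_BEHAVIOR_PRESERVED_BIT in EGL_SURFACE_TYPE (otherwise the call may fail):

```c
#include <EGL/egl.h>

/* Explicitly request preserved-swap behavior instead of relying on what a
 * vendor happens to do under EGL_BUFFER_DESTROYED. Only succeeds if the
 * surface's EGLConfig supports it; check the return value. */
EGLBoolean request_preserved_swaps(EGLDisplay dpy, EGLSurface surface)
{
    return eglSurfaceAttrib(dpy, surface, EGL_SWAP_BEHAVIOR,
                            EGL_BUFFER_PRESERVED);
}
```
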
For your application that means: if you need the buffer cleared after a swap, you need to issue a glClear call yourself. That gives you consistent behavior everywhere. Relying on EGL_BUFFER_DESTROYED to mean the buffer is cleared is not a valid assumption; it just happens to work on some implementations at this time, and they can change it at any time if they discover, for example, that a different behavior gives better performance or uses less battery.
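
In practice that looks like the following frame-loop sketch; draw_scene() is a hypothetical stand-in for your actual rendering:

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>

void draw_scene(void); /* hypothetical: your actual rendering */

/* One frame that makes no assumptions about what the swap left behind:
 * clear first, draw, then swap. */
void frame(EGLDisplay dpy, EGLSurface surface)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);  /* consistent on every implementation */

    draw_scene();

    eglSwapBuffers(dpy, surface);  /* contents undefined from here on */
}
```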

I would attribute it to performance reasons rather than programmer temperament. Filling a buffer has a cost. Unless the application needs the content of the previous frame, I believe it is usually safe to use EGL_BUFFER_DESTROYED, since commonly the buffer will be explicitly cleared, or all pixels will be covered by the upcoming frame.

Right, but what EGL_BUFFER_DESTROYED does not guarantee is that the implementation clears the buffer for you: some implementations might always do it, some under certain circumstances that could even change from frame to frame, and some might never clear. If you reliably want the buffers cleared, you must do so yourself.

That’s what I meant :slight_smile:

Having talked to an IMG devtech engineer about this specifically: on their GPUs and drivers (and probably on other tile-based mobile GPUs/drivers), it is best for performance to always call glClear() after binding a framebuffer. The glClear() is better than free; it actually boosts your performance.

Failing to call this means that when rasterizing each screen tile, the driver must pull in the previous contents of that tile into the GPU’s on-chip rasterization cache from the framebuffer stored in DRAM over the extremely slow DRAM bus. Using a glClear() after binding the framebuffer completely avoids this full-screen fetch of the previous contents from DRAM. Instead, it just pre-inits the fast on-chip cache with that clear value before rasterization.
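
A sketch of that pattern; begin_render_pass() and fbo are hypothetical names, with the framebuffer object created elsewhere:

```c
#include <GLES2/gl2.h>

/* On a tile-based GPU, clearing every attachment right after binding tells
 * the driver it does not need to fetch the old tile contents back from
 * DRAM; it just pre-initializes the on-chip tile cache with the clear
 * value. fbo is a hypothetical framebuffer object created elsewhere. */
void begin_render_pass(GLuint fbo)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
}
```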

Similarly, at the end of rendering content to a framebuffer, be sure to call glInvalidateFramebuffer() (or glDiscardFramebuffer()) on the buffers you rendered to (e.g. DEPTH, STENCIL) that you do not care about anymore. This prevents the needless full-screen write of these buffers from the on-GPU tile rasterizer cache back to DRAM over the slow DRAM memory bus, saving time and boosting your rendering performance. RELATED: There is also a glInvalidateSubFramebuffer(). But I haven’t heard whether IMG or other vendors actually take advantage of this in their drivers to reduce DRAM writes.
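
A sketch of that end-of-pass invalidation under GLES 3.0, for a user framebuffer object (for the default framebuffer the enums would be GL_DEPTH and GL_STENCIL instead); end_render_pass() is a hypothetical name:

```c
#include <GLES3/gl3.h>

/* After drawing, mark depth and stencil as dead so the tiler can skip
 * writing them back to DRAM. GLES 3.0 API; on GLES 2.0 the equivalent is
 * glDiscardFramebufferEXT from the EXT_discard_framebuffer extension. */
void end_render_pass(void)
{
    static const GLenum dead[] = { GL_DEPTH_ATTACHMENT,
                                   GL_STENCIL_ATTACHMENT };

    glInvalidateFramebuffer(GL_FRAMEBUFFER, 2, dead);
}
```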

Thanks all, I now understand how to control the color buffer.
