Can I combine OpenGL ES 2 and regular OpenGL?

Hello,

I want to call glGetTexImage() on an FBO texture in my Raspberry Pi 4B program to get hold of the buffer without the time-consuming pixel copy.

But I read that glGetTexImage() is not available in OpenGL ES, which is what I’m using and which seems the most common on the Pi.

But the Pi can also use ‘standard’ OpenGL up to version 2.1, and it looks like glGetTexImage() should work with that.

Can I somehow add or use ‘regular’ OpenGL commands from within OpenGL ES 2?

Cheers
Fred

Have you tried glReadPixels()?

The usual advice applies. Don’t read it back and expect the result immediately unless you’re fine with a delay. Instead, do the readback to a PBO and fetch the result from there a frame or two later.

Yes, I tried that a while ago and it’s just too slow, I’m afraid.
And latency + speed are really important to me. I have a hardware mem-to-mem pipeline set up, and I want to use glGetTexImage() because I understand it just returns a pointer to the buffer in memory and doesn’t involve any copying.

No, that’s not how it works, which you can tell just from the API: it doesn’t return a pointer at all. You give it a pointer to memory, which it fills in (unless you use a buffer object via a PBO, which you could do with glReadPixels too).

Yes sorry I used the wrong wording there.

But the docs say,

The semantics of glGetTexImage are then identical to those of glReadPixels, with the exception that no pixel transfer operations are performed

Also I asked about this on the mesa-dev mailing list and they said,

I’d recommend using glGetTexImage or other similar GL APIs for getting the data out of the GL texture.

While mmap of a dma-buf file descriptor works in theory, direct CPU reads from GPU accessible memory can be very slow on some platforms.

You’re misunderstanding that. “Pixel transfer operations” means the transformations which are controlled by glPixelTransfer (which doesn’t exist in OpenGL ES). It still requires a copy.

I wouldn’t assume that glGetTexImage will be any faster than glReadPixels in general. In both cases, the main potential bottleneck is synchronisation; the driver must wait for all pending commands to complete before it can start copying the data. But you’ll still need to do that if you’re planning on using lower-level APIs (EGL or platform-specific) to read the data directly, assuming that you want the data generated by prior rendering commands rather than whatever happens to have been rendered by the point you start reading.

To minimise the cost of synchronisation, leave as much time as possible between rendering and requesting the results of rendering. This typically involves having multiple render targets so that you can render to target 1, render to target 2, read from target 1. Even if it has to wait for the first step to complete, the pipeline still has the commands for the second step pending so the GPU isn’t sitting idle until you finish reading the data and start issuing more commands.
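The alternation described above can be sketched as follows. The GL calls are stubbed out with prints so the example is self-contained; only the ordering of operations is the point:

```c
#include <stdio.h>

/* Two render targets in alternation: render into target frame % 2, read
 * back the target that was rendered one frame earlier.  While the driver
 * synchronises the readback, the GPU still has this frame's commands
 * pending, so it never sits idle. */

enum { NUM_TARGETS = 2 };

/* Pure helpers for the alternation. */
int write_target(unsigned frame) { return (int)(frame % NUM_TARGETS); }
int read_target(unsigned frame)  { return (int)((frame + 1) % NUM_TARGETS); }

/* Stand-ins for glBindFramebuffer + draw calls, and for glReadPixels. */
static void render_to(int t) { printf("render   -> target %d\n", t); }
static void read_back(int t) { printf("readback <- target %d\n", t); }

void run_frames(unsigned n)
{
    for (unsigned frame = 0; frame < n; ++frame) {
        render_to(write_target(frame));
        if (frame >= 1)              /* nothing to read on the first frame */
            read_back(read_target(frame));
    }
}
```

So at frame N you read the pixels rendered at frame N−1, trading one frame of latency for far less synchronisation cost.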

If you don’t need to use the texture as a texture (i.e. access it via a sampler2D uniform in a shader), a renderbuffer might be more efficient…

Thanks a lot for the elaboration, and I will try the two-pass approach you suggest, as I already have that set up in my code.

But I’ll also explain a bit more about why I feel I shouldn’t have to copy the pixel data.
I’m working on a Raspberry Pi 4B, and on these both the GPU and the CPU use the same physical memory; when setting the Pi up, you allocate a certain amount of that memory to the GPU.
Further, there are different ‘sections’ of memory that are used in different ways. A short quote from one of the knowledgeable Raspberry Pi engineers on the Pi forum, who deals a lot with low-level display and camera drivers:

“On Pi4, OpenGL allocations can be from any system memory for textures, although the rendered output has to be from the CMA heap if you want to display it, and hence in the bottom 1GB.”

Now, since my goal is not to display the final pixels but to use them in a computer vision program (SLAM), I am using off-screen FBO textures to render to. I’m therefore hoping that the ‘if you want to display it’ rule does not apply, so the memory is not in the CMA heap and I could access it directly from a CPU process.

You mention that a ‘lower-level API’ might be able to access the data directly. Could you explain that a bit more?

Well, EGL has the EGL_KHR_lock_surface3 extension (and earlier versions) which allows an EGLSurface to be mapped to client memory. There may also be platform-specific mechanisms (I’m not familiar with the Pi).

Ultimately, you’ll probably be better off asking on a Pi-specific forum. Any solution will involve mechanisms outside of the OpenGL API.

It doesn’t matter what you “feel”. If the API says you have to copy pixel data, then that’s what you have to do. It’s not a question of what is possible in the ultimate hardware sense, but of what the API allows.

No form of OpenGL has a mechanism that allows you to just get a pointer to a texture’s pixel data. That such pointers definitely exist somewhere in the driver and point to memory that is definitely accessible by the CPU is irrelevant; you can’t get them through OpenGL.

Vulkan provides a possible mechanism for doing so via linear textures. But hardware is allowed to impose pretty arbitrary restrictions on how linear textures can be used, what formats are allowed with them, and so forth.

Thank you for that tip!
I’ll check that out now!

Cheers!