How to bind a framebuffer to a GL_TEXTURE_EXTERNAL_OES texture?

Ok, I did the following now:

    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

    glGenTextures(1, &externalTexture);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, externalTexture);
    glTexImage2D(GL_TEXTURE_EXTERNAL_OES, 0, GL_RGBA, decodedNvFrame->width, decodedNvFrame->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glUniform1i(texLocation, 0);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, externalTexture, 0);

but I still end up with the glFramebufferTexture2D error.

I also tried putting GL_TEXTURE_EXTERNAL_OES in glFramebufferTexture2D, even though the documentation doesn't say this option is possible, and I still get the error.

Side note: I just happened to notice a related extension out there which adds REPEAT, MIRRORED_REPEAT, and CLAMP_TO_BORDER (if OES_texture_border_clamp). It looks like EXT_YUV_target is what adds this.

According to OES_EGL_image_external:

Looks like you need to use glEGLImageTargetTexture2DOES(), and this takes GL_TEXTURE_2D.
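As a rough sketch (the tex and hEglImage names here are just placeholders for whatever handles you have, and this assumes OES_EGL_image is available), the usage would look something like:

    // Hypothetical sketch: define a plain 2D texture's contents from an
    // existing EGLImage, instead of allocating storage with glTexImage2D.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)hEglImage);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);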

Here are a few posts that appear related to what you are doing:

Thank you so much, your answer helped me a lot.

I did

    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glGenTextures(1, &externalTexture);
    glBindTexture(GL_TEXTURE_2D, externalTexture);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, externalTexture, 0);


	glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
	
	EGLImageKHR hEglImage;

	EGLSyncKHR eglSync;
	int iErr;
	hEglImage = NvEGLImageFromFd(eglDisplay, decodedNvFrame->nvBuffer->planes[0].fd);
	if (!hEglImage)
		printf("Could not get EglImage from fd. Not rendering\n");

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, externalTexture);
	glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, hEglImage);

	glReadBuffer(GL_COLOR_ATTACHMENT0);

	glReadPixels(0, 0, 512, 512, GL_RED, GL_UNSIGNED_BYTE, r);
	assertOpenGLError("268");

	for (int i = 0; i < 50; ++i)
	{
		printf("%i ", r[i]);
	}
	printf("\n");

and I got some 'random' numbers on the screen, so it looks like it worked: my frame buffer got the image from the external texture. However, when the same code is called from GDK's signal_render, I get a 1286 error at glReadPixels. I think it has something to do with GTK's framebuffer messing with my own frameBuffer. I thought that binding my frameBuffer would make GTK's framebuffer go away, but it didn't.

Do you know what's happening?

Also, the documentation doesn't say that glEGLImageTargetTexture2DOES accepts GL_TEXTURE_2D.

Ok, I've read through all the things you sent.

I wrote new code now that I've learned more about how OpenGL works. Here's what I think this code does:

It creates two textures, frameBufferTexture and externalTexture. We write our EGLImage to externalTexture and draw into frameBufferTexture, sampling from externalTexture. Then we bind frameBufferTexture to read from it with glReadPixels:

    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    //externalTexture is the texture I'm gonna send the eglImage
    glGenTextures(1, &externalTexture);
    //bufferTexture is where we're going to render to
    glGenTextures(1, &frameBufferTexture);
    //Lines below create an empty texture with our image size (I'm assuming it's RGBA, I don't know)
    glBindTexture(GL_TEXTURE_2D, externalTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, decodedNvFrame->width, decodedNvFrame->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    //Tells the shader which texture unit we're using
    glUniform1i(texLocation, 0);
    //Don't know what these are for, exactly
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    //Binds our frameBuffer because we're going to write to it
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glBindTexture(GL_TEXTURE_2D, frameBufferTexture);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, frameBufferTexture, 0);
    
    //Activate externalTexture because we want to put our hEglImage image there
    glBindTexture(GL_TEXTURE_2D, externalTexture);
    EGLImageKHR hEglImage;
    EGLSyncKHR eglSync;
    hEglImage = NvEGLImageFromFd(eglDisplay, decodedNvFrame->nvBuffer->planes[0].fd);
    //Puts hEglImage into externalTexture?
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, hEglImage);
    glBindVertexArray(vertexArrayObject);
    //Draw to our frame buffer using the texture from externalTexture
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    //Bind frameBufferTexture because we're going to read data from it with glReadPixels
    glBindTexture(GL_TEXTURE_2D, frameBufferTexture);
    //-------------
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    //Just pick a small 512x512 image so we can see in the console if numbers appear 'random'
    glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, r);

    for (int i = 0; i < 100; ++i)
    {
        printf("%i ", r[i]);
    }
    printf("\n");

The problem is that I get an error at glReadPixels.

Here’s the fragment shader:

    #version 330 core
    out vec4 FragColor;
    
    in vec2 TexCoord;
    uniform sampler2D tex;
    
    void main()
    {
    	FragColor = texture(tex, TexCoord);
    }

OES_EGL_image:

Thanks for this description. Comparing this with your code suggests that there’s at least one error.

Actually, while you generate texture handles for both of these, you don’t allocate any texture storage for frameBufferTexture and you redefine the storage for externalTexture twice.

I believe you want to change this line as follows:

You're apparently trying to define the texture backing the FBO's COLOR_ATTACHMENT0, frameBufferTexture, but you didn't bind its handle; you bound externalTexture instead.

This leaves your FBO in an incomplete state, which probably explains why your glReadPixels (and probably your glDrawArrays as well) failed.

You should check for Framebuffer Completeness before you start rendering to an FBO. Also, Check for OpenGL Errors.
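For example, something along these lines would catch both (a sketch; the printf calls are just a stand-in for however you report errors):

    // Verify FBO completeness before rendering to it or reading from it.
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
        printf("FBO incomplete: 0x%x\n", status);

    // Drain the GL error queue after the setup calls.
    for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
        printf("GL error: 0x%x\n", err);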

You don’t need the glBindTexture here. glReadPixels reads from the current READ_BUFFER in the current READ_FRAMEBUFFER. In your case, that’s the COLOR_ATTACHMENT0 buffer (which you defined via glReadBuffer()) for frameBuffer (which is the last READ_FRAMEBUFFER you’ve bound; in your case, via glBindFramebuffer()).

So basically just remove the glBindTexture() call here. Though it shouldn’t cause you any problems, it’s not necessary in setting up for the glReadPixels() call.
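In other words, the minimal readback setup is just this (a sketch using your existing names; nothing else is required):

    // glReadPixels sources from the READ_BUFFER of the current
    // READ_FRAMEBUFFER; no texture binding is involved.
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);  // binds DRAW and READ
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, r);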

Also, a suggestion:

When you define the storage for frameBufferTexture, I’d suggest being specific about which format you want by replacing the first GL_RGBA with GL_RGBA8. For details, see glTexImage2D(). This addresses your comment as well.
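That is, something like this (a sketch, using your existing variables):

    // Sized internal format (GL_RGBA8) rather than the unsized GL_RGBA.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                 decodedNvFrame->width, decodedNvFrame->height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);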

Thanks, based on your answer, I improved the code. Indeed I was binding the wrong texture. Now I can read from the frame buffer without errors.

I get, in the output,

0 0 0 255 0 0 0 255, ...

which is from the glTexImage2D that I did in the beginning. That means glDrawArrays is failing to write to the frameBuffer. However, I didn't get any OpenGL error this time. Even the frame buffer is complete, according to the test I added.

    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glGenTextures(1, &externalTexture);
    glGenTextures(1, &frameBufferTexture);
    glBindTexture(GL_TEXTURE_2D, frameBufferTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, decodedNvFrame->width, decodedNvFrame->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glUniform1i(texLocation, 0);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glBindTexture(GL_TEXTURE_2D, frameBufferTexture);

    EGLImageKHR hEglImage;

    hEglImage = NvEGLImageFromFd(eglDisplay, decodedNvFrame->nvBuffer->planes[0].fd);
    if (!hEglImage)
        printf("Could not get EglImage from fd. Not rendering\n");
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, frameBufferTexture, 0);
    glBindTexture(GL_TEXTURE_2D, externalTexture);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, hEglImage);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glUniform1i(texLocation, 0);
    glBindVertexArray(vertexArrayObject);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glReadBuffer(GL_COLOR_ATTACHMENT0);

    GLenum frameBufferStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (frameBufferStatus != GL_FRAMEBUFFER_COMPLETE) {
        printf("frameBufferStatus problem!\n");
        abort();
    }
    glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, r);

    for (int i = 0; i < 100; ++i)
    {
        printf("%i ", r[i]);
    }
    printf("\n");

    NvDestroyEGLImage(eglDisplay, hEglImage);

Do you have any suggestions?

Again: thank you so much!!! You’re helping me a lot!

The first thing I would try (after binding your framebuffer for drawing) is to set a glClearColor() and do a glClear() of the color buffer. Make sure that your glReadPixels() produces the clear color.
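For example (a sketch; pick any distinctive clear color):

    // Sanity check: clear the FBO to a known color and read it back.
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glClearColor(0.2f, 0.4f, 0.6f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, r);
    // Expect roughly "51 102 153 255" repeated if the FBO path works.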

If that works, try creating and plugging in a standard OpenGL 2D texture (with known content) in place of your EGLImage-wrapped 2D texture, as sketched after the next paragraph. See if that works. If not…

Then it just boils down to getting your state setup to render that draw call properly. A few things to check: shader program successfully linked and bound, vertex shader being fed properly so that it produces on-screen positions, color writes enabled, etc.
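Going back to the known-content stand-in texture suggested above, a sketch of what that could look like (the 256×256 size and the testTex name are just placeholders):

    // Fill a plain 2D texture with a solid, recognizable color and bind it
    // to the texture unit the shader samples from.
    static unsigned char pixels[256 * 256 * 4];
    for (int i = 0; i < 256 * 256 * 4; i += 4) {
        pixels[i + 0] = 255;  // R
        pixels[i + 1] = 0;    // G
        pixels[i + 2] = 0;    // B
        pixels[i + 3] = 255;  // A (opaque red)
    }
    GLuint testTex;
    glGenTextures(1, &testTex);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, testTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Run the same draw call; glReadPixels should then return 255 0 0 255.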

    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glGenTextures(1, &externalTexture);
    glGenTextures(1, &frameBufferTexture);
    glBindTexture(GL_TEXTURE_2D, frameBufferTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, decodedNvFrame->width, decodedNvFrame->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glUniform1i(texLocation, 0);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glBindTexture(GL_TEXTURE_2D, frameBufferTexture);

    EGLImageKHR hEglImage;

    hEglImage = NvEGLImageFromFd(eglDisplay, decodedNvFrame->nvBuffer->planes[0].fd);
    if (!hEglImage)
        printf("Could not get EglImage from fd. Not rendering\n");
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, frameBufferTexture, 0);
    glBindTexture(GL_TEXTURE_2D, externalTexture);
    //-----------------------------------------------------
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, d);
    //-----------------------------------------------------
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, hEglImage);
    glUniform1i(texLocation, 0);
    glBindVertexArray(vertexArrayObject);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glReadBuffer(GL_COLOR_ATTACHMENT0);

    GLenum frameBufferStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (frameBufferStatus != GL_FRAMEBUFFER_COMPLETE) {
        printf("frameBufferStatus problem!\n");
        abort();
    }
    glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, r);

    for (int i = 0; i < 100; ++i)
    {
        printf("%i ", r[i]);
    }
    printf("\n");

    NvDestroyEGLImage(eglDisplay, hEglImage);

Take a look at the lines between the dashed comment markers. If I compile without them, I get the 0 0 0 255 result. If I add those five lines, I get, in the output, the exact contents of d. This means that the problem is that glEGLImageTargetTexture2DOES fails to write my image to externalTexture, and therefore my fragment shader draws from an empty externalTexture. When the glTexImage2D is added, it draws from a filled externalTexture. So the shader is indeed working, and the problem is in the glEGLImageTargetTexture2DOES call.

When I said glEGLImageTargetTexture2DOES doesn't accept TEXTURE_2D, you pointed me to an extension, https://www.khronos.org/registry/OpenGL/extensions/OES/OES_EGL_image.txt, which says it can be TEXTURE_2D. Shouldn't I need to use this extension in my fragment shader then? If so, how should I add it? That's the only thing I can think of now.

Do you have any other ideas?

I'm suspicious of the eglDisplay in NvEGLImageFromFd. I don't get why I need a display at all. Do you understand why? In NVIDIA's example it passed a display from an X11 window, or something like that. I'm not interested (right now) in rendering to a display; I want to render to a frame buffer.

In my example, I’m initializing eglDisplay like this:

    eglDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    assertEGLError("eglGetDisplay");

    eglInitialize(eglDisplay, nullptr, nullptr);
    assertEGLError("eglInitialize");

    eglChooseConfig(eglDisplay, nullptr, &eglConfig, 1, &numConfigs);
    assertEGLError("eglChooseConfig");

    eglBindAPI(EGL_OPENGL_API);
    assertEGLError("eglBindAPI");

    eglContext = eglCreateContext(eglDisplay, eglConfig, EGL_NO_CONTEXT, NULL);
    assertEGLError("eglCreateContext");

    //surface = eglCreatePbufferSurface(eglDisplay, eglConfig, nullptr);
    //assertEGLError("eglCreatePbufferSurface");

    eglMakeCurrent(eglDisplay, EGL_NO_SURFACE, EGL_NO_SURFACE, eglContext);
    assertEGLError("eglMakeCurrent");

I grabbed this code from here: egl_offscreen_opengl/egl_opengl_test.cpp at master · svenpilz/egl_offscreen_opengl · GitHub

Even though this example renders to a frame buffer, it uses an EGL display. I don't know why. This eglDisplay thing looks important.

I answered that question here:

While EGLDisplay has “display” in the name of the type, don’t interpret its scope as limited to that.

EGLDisplay is basically your connection to the graphics software or driver underlying EGL (graphics device interface, graphics server, etc.) to which rendering commands are submitted. In cases where rendering is performed by a GPU, it may also imply a specific GPU on which that rendering will be performed.

The analog in X Windows (on UNIX/Linux) is an X Display type:

The analog for MS Windows is a DC (i.e. Device Context):

The EGL Spec’s language is more abstract here, but the concept is the same:

  • EGLDisplay - Most EGL calls include an EGLDisplay parameter. This represents the abstract display on which graphics are drawn. … All EGL objects are associated with an EGLDisplay, and exist in a namespace defined by that display. Objects are always specified by the combination of an EGLDisplay parameter with a parameter representing the handle of the object.

Each of these is needed to create a "GL context" on the associated platform, with which you can render via OpenGL or OpenGL ES:

  • EGL: EGLDisplay → EGLContext
  • X Windows: Display → GLXContext
  • MS Windows: DC → GLRC

So I get that you don’t want to render on a physical display. However, to use the underlying graphics software/driver (possibly backed by a GPU), you still need a connection to it. Under EGL, that’s an EGLDisplay.

Check out this article for how to obtain an EGLDisplay for offscreen-only rendering, backed by the GPU of your choice:

Notice that he says this method does not connect to the X server (which on Linux provides rendering access to physical displays on the system, if any).

As you can see, this demonstrates how to render via EGL and OpenGL (or OpenGL ES) without an EGLSurface (on-screen window, pbuffer, or pixmap), on NVidia drivers at least. But notice that you still need an EGLDisplay and an EGLContext for rendering here, even though there are no physical displays or display server running on the system. This is to provide access to the graphics driver and (behind it) a specific GPU.
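In rough outline, the approach from that article looks something like this (a sketch; it assumes the EGL_EXT_device_enumeration and EGL_EXT_platform_device extensions are available, which recent NVidia drivers expose):

    // Enumerate EGL devices and get an EGLDisplay for one of them,
    // with no window system connection at all.
    PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT =
        (PFNEGLQUERYDEVICESEXTPROC) eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC) eglGetProcAddress("eglGetPlatformDisplayEXT");

    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    eglQueryDevicesEXT(8, devices, &numDevices);

    // Pick a device (devices[0] here) and create a display for it.
    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                              devices[0], nullptr);
    eglInitialize(dpy, nullptr, nullptr);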

Related to your desire to render without an EGLDisplay… In the EGL spec, you do find this footnote under eglMakeCurrent():

So, check your EGL implementation to see if it supports this. If not, it probably needs the display for a reason.
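One way to check is to look at the display's extension string (a sketch; EGL_KHR_surfaceless_context is the extension that normally signals this, and strstr needs <string.h>):

    // EGL_KHR_surfaceless_context is what typically allows
    // eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx).
    const char *ext = eglQueryString(eglDisplay, EGL_EXTENSIONS);
    if (ext && strstr(ext, "EGL_KHR_surfaceless_context"))
        printf("surfaceless contexts supported\n");
    else
        printf("surfaceless contexts NOT supported; keep a pbuffer around\n");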

After analyzing the link you sent me: Trying to process with OpenGL an EGLImage created from a dmabuf_fd - Jetson TX1 - NVIDIA Developer Forums

I found that the only thing I was doing differently was the EGL surface. I was not creating a surface at all, because I thought it wasn't needed.

After doing this:

    const EGLint context_attrib_list[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2,
        EGL_NONE
    };
    const EGLint pbuffer_attrib_list[] = {
        EGL_WIDTH, 2304,
        EGL_HEIGHT, 1296,
        EGL_NONE,
    };
    const EGLint config_attrib_list[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };

    EGLDisplay egl_display;
    EGLSurface egl_surface;
    EGLContext egl_context;
    EGLConfig egl_config;
    EGLint matching_configs;

    eglDisplay = eglGetDisplay((EGLNativeDisplayType) 0);
    eglInitialize(eglDisplay, 0, 0);
    eglBindAPI(EGL_OPENGL_ES_API);
    eglChooseConfig(eglDisplay, config_attrib_list, &egl_config, 1, &matching_configs);
    egl_surface = eglCreatePbufferSurface(eglDisplay, egl_config, pbuffer_attrib_list);
    egl_context = eglCreateContext(eglDisplay, egl_config, EGL_NO_CONTEXT, context_attrib_list);
    eglMakeCurrent(eglDisplay, egl_surface, egl_surface, egl_context);

It kinda worked for rendering to the frame buffer, I guess. However, the data I'm reading looks like this:

49 49 49 255 52 52 52 255 57 57 57 255 60 60 60 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 55 55 55 255 55 55 55 255 55 55 55 255 55 55 55 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 55 55 55 255 55 55 55 255 55 55 55 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 56 56 56 255 55 55 55 255 55 55 55 255 55 55 55 255 55 55 ...

As you can see, the R, G and B always repeat, and the alpha is always 255. I don't think this is right and I can't find an explanation. I've filled my frame buffer with RGBA data:

    glBindTexture(GL_TEXTURE_2D, frameBufferTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

and I’m reading as RGBA:

    glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, r);

Also, another big problem I’m having is that this code only works when called outside GTK. If it’s called from GTK’s signal_render signal, I get:

0 0 0 255 0 0 0 255 ...

which suggests NvEGLImageFromFd or glEGLImageTargetTexture2DOES aren’t filling the texture and I’m reading from a blank texture.

So could it be that GTK is messing with my eglMakeCurrent, and therefore the surface is being overwritten? Since I've found that a pbuffer surface is needed for NvEGLImageFromFd or glEGLImageTargetTexture2DOES to work, the GTK failure suggests that some of the EGL context state is being overwritten.

According to another link I mentioned above:

an EGLSurface is apparently not necessary with NVidia drivers for the case where you’re just rendering to an FBO. See the bottom section of that page under Managing your own OpenGL resources.

But since you’ve got it mostly working with a PBuffer, there’s really little reason to change this.

This allocates the storage assigned to the texture, but the contents are undefined.

So by “I’ve filled my frame buffer with RGBA data”, I gather that your FBO render (where you read from the externalTexture and write to the frameBufferTexture) is what’s doing this, right?

Is this not the content of your decodedNvFrame? You might try changing the content of decodedNvFrame to see if it has some effect on the glReadPixels() result. Also (separately), try commenting out the glDrawArrays() call and replacing it with a glClear(GL_COLOR_BUFFER_BIT); see if you see a corresponding change in the glReadPixels() result.

I would set GTK aside until you get the non-GTK solution fully working. As I recall, GTK does create and bind its own contexts and framebuffer objects (e.g. see GtkGLArea) so separately you’ll need to get your head into how to make it do what you want.

GTK itself doesn’t know about OpenGL. But if you’re using a GtkGLArea, that will call gtk_gl_area_make_current with its own context prior to emitting the render signal, and possibly at other times.

In general, you shouldn’t assume that your EGL context will remain current after your callbacks return. A GtkGLArea has its own context and its internals will make that context current whenever it needs to.
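If you do need your own EGL state inside the render callback, one pattern (only a sketch; it assumes GTK itself is using EGL on your platform, and it is not guaranteed to play nicely with GtkGLArea's own bookkeeping) is to make your context current at the start of every callback and restore the previous one before returning:

    // At the top of the render callback: remember what GTK had current
    // (if it was an EGL context at all), then make our own context current.
    EGLDisplay prevDisplay = eglGetCurrentDisplay();
    EGLContext prevContext = eglGetCurrentContext();
    EGLSurface prevDraw    = eglGetCurrentSurface(EGL_DRAW);
    EGLSurface prevRead    = eglGetCurrentSurface(EGL_READ);

    eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);
    // ... EGLImage import, FBO rendering, glReadPixels, etc. ...

    // Before returning, restore whatever was current so GTK isn't confused.
    if (prevDisplay != EGL_NO_DISPLAY)
        eglMakeCurrent(prevDisplay, prevDraw, prevRead, prevContext);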

It apparently depends on what you mean when you say "GTK": GTK-the-repo (which knows about GL), GTK-the-implementation (which makes use of GDK/GL), etc.

> git clone https://gitlab.gnome.org/GNOME/gtk
...
> find . -type f | xargs grep MakeCurrent
./gdk/wayland/gdkglcontext-wayland.c:        eglMakeCurrent(display_wayland->egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE,
./gdk/wayland/gdkglcontext-wayland.c:      eglMakeCurrent(display_wayland->egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE,
./gdk/wayland/gdkglcontext-wayland.c:  if (!eglMakeCurrent (display_wayland->egl_display, egl_surface,
./gdk/wayland/gdkglcontext-wayland.c:      g_warning ("eglMakeCurrent failed");
./gdk/win32/gdkglcontext-win32.c:        wglMakeCurrent (NULL, NULL);
./gdk/win32/gdkglcontext-win32.c:      if (best_pf == 0 || !wglMakeCurrent (dummy.hdc, dummy.hglrc))
./gdk/win32/gdkglcontext-win32.c:          wglMakeCurrent (hdc_current, hglrc_current);
./gdk/win32/gdkglcontext-win32.c:      wglMakeCurrent (hdc_current, hglrc_current);
./gdk/win32/gdkglcontext-win32.c:  if (best_idx == 0 || !wglMakeCurrent (dummy.hdc, dummy.hglrc))
./gdk/win32/gdkglcontext-win32.c:  wglMakeCurrent (NULL, NULL);
./gdk/win32/gdkglcontext-win32.c:  if (!wglMakeCurrent (hdc, hglrc_legacy))
./gdk/win32/gdkglcontext-win32.c:          wglMakeCurrent (hdc_current, hglrc_current);
./gdk/win32/gdkglcontext-win32.c:      if (!wglMakeCurrent (hdc, hglrc_base))
./gdk/win32/gdkglcontext-win32.c:          wglMakeCurrent (NULL, NULL);
./gdk/win32/gdkglcontext-win32.c:      wglMakeCurrent (hdc_current, hglrc_current);
./gdk/win32/gdkglcontext-win32.c:      wglMakeCurrent(NULL, NULL);
./gdk/win32/gdkglcontext-win32.c:  if (!wglMakeCurrent (context_win32->gl_hdc, context_win32->hglrc))

Putting in the glClear actually gives me 0 for everything, which is good.

I tried it now and I'm getting data like this (without the alpha):

79 76 69 77 74 67 83 80 73 66 63 56 78 75 68 80 77 70 84 81 74 86 83 76 76 73 66 82 79 72 68 65 58 73 70 63 75 72 65 75 72 65 75 72 65 76 73 66 76 73 66 77 74 67 77 74 67 77 74 67 65 62 55 85 82 75 78 75 68 52 49 42 74 71 64 74 71 64 74 71 64 74 71 64 74 71 64 74 71 64 74 71 64 76 73 66 79 76 69 80 77 70 76 73 66 75 72 65 80 77 70 79 76 69 75 72 65 75 72 65 76 73 66 77 74 67 80 77 70 82 79 72 89 86 79 93 90 83 65 62 55 55 52 45 52 49 42 41 38 31

This looks more random and more like an image from a camera. I guess it's because I'm looking at an image taken on a sunny day; yesterday I was looking at an image taken at night in infrared mode, so it was almost monochromatic, and that's probably why I was getting repeated RGB values. Compression might be doing this 'optimization'.

I was going to re-render this image to the screen, but I remembered that the whole thing won't work when I use GTK.

I tried inspecting the image before rendering, using Jetson Linux API Reference: Buffer Manager | NVIDIA Docs, as you suggested, but it gives me strange results, not related to the frame buffer data I got.

I then used c++ - How to take screenshot in OpenGL - Stack Overflow. I tried reading with GL_BGR_EXT but it wouldn't work; I got all 0s. I then changed to GL_RGB and got an image (with R and G twisted, but anyway, it worked). I'd just like to ask: can't I read in a format different from the one written into the frame buffer?

So the frame buffer is right!

So, yes, I'm using a GLArea, and this might be the reason things aren't working.

I tried calling eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext); every time I render, but it also didn't work. The only thing GTK seems to be messing with is the pbuffer. Do you have an idea of how to somehow reattach the pbuffer to the context?

Again, thank you so much for your help!!!

Glad to hear that you’re getting reasonable results now.

Re using the GL_RGBA8 internal texture format for your pass-through 2D texture and seeing flopped R and B, you can deal with this just by swapping the R and the G in your fragment shader when you read from the texture. You could also do a texel transfer from one GL texture to another if you wanted and flop it that way.
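For example, in your fragment shader that's a one-line swizzle (adjust the swizzle to whichever channels actually look swapped):

    void main()
    {
        // Reorder the channels on the fly while sampling.
        FragColor = texture(tex, TexCoord).bgra;
    }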

Re GL_BGR / GL_BGR_EXT, as I understand it this is not an internal texture format. Rather, it's an external format used to state how the texels are stored someplace else. It's used when you are loading texels into a GL texture from some other place, or transferring the texels in a GL texture or framebuffer to some other place, to describe the format of that "some other place" (such as with glTexSubImage2D() or glReadPixels()).
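GL_RGBA with GL_UNSIGNED_BYTE is always accepted by glReadPixels; beyond that, you can query which extra format/type pair the implementation prefers (a sketch; these queries exist in OpenGL ES and in desktop GL 4.1+):

    // Ask the implementation which glReadPixels format/type it prefers
    // for the currently bound framebuffer.
    GLint readFormat = 0, readType = 0;
    glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
    glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);
    printf("preferred read format: 0x%x, type: 0x%x\n", readFormat, readType);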

Because with OES_EGL_image you're basically overlaying a GL internal texture format onto the EGL image data to interpret it, you're limited to what internal formats are actually supported.
