Offscreen Rendering using OpenGL and EGL

Uh, what? EGL is a mechanism to set up rendering contexts for (primarily) OpenGL ES. It fulfils largely the same role as GLX/WGL for desktop OpenGL.

On a headless system, you can use either Xvnc (a software-only X server) or x11vnc (which attaches to an existing, possibly hardware-accelerated X server) to create a display which is accessed remotely via a VNC client. The performance of VNC is likely to be much worse than using a remote X server, but you aren’t limited to OpenGL 1.x.

Thank you @GClements for the reply, good information. My idea is to create an image that is not rendered to a given window but rather to something like a frame buffer that will be saved as a PPM image file; I arbitrarily picked a triangle because it seemed like the easiest thing. I wanted to use EGL so I wouldn’t be tied to a particular windowing API - is this an incorrect assumption?

Would something like the following work, in say a Linux environment (with no display):

    static const EGLint configAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_BLUE_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_RED_SIZE, 8,
        EGL_DEPTH_SIZE, 8,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    
    // Set the desired pixel buffer configuration
    static const EGLint pbufferAttribs[] = {
        EGL_WIDTH, static_cast<int>(width),
        EGL_HEIGHT, static_cast<int>(height),
        EGL_NONE,
    };
    // Initialize EGL
    EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLint major, minor;
    eglInitialize(eglDpy, &major, &minor);

    // Choose the first EGL configuration that matches our requirements
    EGLint numConfigs;
    EGLConfig eglCfg;
    eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);

    // Create a pixel buffer surface
    EGLSurface eglSurf = eglCreatePbufferSurface(eglDpy, eglCfg, pbufferAttribs);
    eglBindAPI(EGL_OPENGL_API);

The above is not complete, obviously, but I wanted to know if this is a good start. I would like to not use GLES if possible.

Thanks again for the help.

No. EGL requires a display even if you’re rendering to an off-screen buffer.

You want to render on a headless (no monitor) linux machine, with no X11 display.

How are you going to view the rendered results?

Are you logged in directly to the linux machine via tty? Are you logged into it remote via SSH? Can you tunnel X11? Does the remote machine have a GPU? What vendor? Do you have admin control of that machine? Do you hope to have OpenGL rendering on a GPU or completely in software on the CPU?

Need more details on your setup and what you’re really trying to accomplish here. There are various solutions, but it’s unclear what you’re even trying to do at this point.

From the original post:

“Display” refers to an X Windows display, typically made available to the system by an X server process running on the machine. With that clarification…

Rendering via EGL + OpenGL on a Linux system “without” an X server running (and thus no advertised X displays) “is” supported on Linux systems running with NVIDIA GPUs using:

Last time I tried it, this worked well. You might need to add the user to the video group to ensure they can open some kernel device files. Details here:

However, it’s unclear from the original post above whether this would meet the needs of the poster. Need more info.

Thank you for the fast reply. Sorry my questions are ambiguous.

I would like to render offscreen via EGL + OpenGL on a Linux system “without” an X server running. The link looks like a possible solution, however I can’t assume an NVIDIA GPU (say could be AMD). Is this still possible?

Ok. Then you may be looking at running against something like Mesa3D or ANGLE, built to render OpenGL in software on the CPU rather than by making use of the native GPU.

I am looking at something like Mesa, but I would like to render OpenGL on GPU if possible.

However, I am beginning to think that without using NVIDIA GPU using EGL as part of this may not be possible :frowning:

If GPU rendering is not available, is there a viable option when one wants to render offscreen using OpenGL (no display available) in Linux? If so, can you point me to some simple code that does an offscreen rendering of a triangle with shaders?

Thank you

Bear in mind that most current Intel CPUs include an integrated GPU.

The main issue with using a GPU on a server is arbitration: GPUs are less amenable to being shared amongst multiple processes than CPUs. This is normally managed by the X server, but you’re typically limited to one X server accessing the GPU at a time, and X authentication normally restricts access to the X server to a specific account (allowing multiple accounts to access an X server is a security risk, as X doesn’t associate “ownership” of data with specific clients).

For a solution which doesn’t involve an X display, Mesa (which is the core of Linux’s OpenGL implementation) has the ability to render directly to a block of memory via the OSMesa (off-screen Mesa) library. See the GL/osmesa.h header; the library is libOSMesa. Questions about OSMesa should be directed to a Mesa-specific forum.
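
Roughly, the OSMesa path looks like this. This is an untested sketch, not production code; the function names are from GL/osmesa.h, rendering happens entirely in software on the CPU, and you link against libOSMesa (plus libGL for the GL entry points):

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>
#include <cstdio>

int main() {
    const int width = 512, height = 512;
    std::vector<unsigned char> buffer(width * height * 4);   // RGBA, plain CPU memory

    // RGBA colour buffer, 24-bit depth, 8-bit stencil, no accumulation buffer.
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 24, 8, 0, NULL);
    if (!ctx || !OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height)) {
        std::fprintf(stderr, "OSMesa setup failed\n");
        return 1;
    }

    // Any GL calls now render (in software) into `buffer`; here just a clear.
    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glFinish();

    // `buffer` now holds the image, bottom row first.
    OSMesaDestroyContext(ctx);
    return 0;
}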

Cool. Thanks @GClements

I know this is not a mesa forum, but can you tell me if libOSMesa is CPU rendering only? I think I read somewhere in passing that it may be.

Thanks again.

Actually, I think I will stick with using OpenGL EGL with GLES2. The shaders and rendering on my Linux desktop look great and I cannot see any issue at this time, of course I just created a simple triangle.

Can anyone tell me what kind of issues could occur using EGL and GLES2 in a Linux desktop environment that one would not find in the corresponding OpenGL 2.x? Meaning, are there any salient shortcomings of GLES2 as compared to OpenGL 2.x? If so, would GLES 3.x be a better fit?

I apologize in advance if this question is a bit cloudy, but I don’t know enough at this point to ask a more concise question.

Thank you again for any help

OpenGL ES 2 on a desktop system will probably support much of the same functionality as desktop OpenGL. I.e. it will support far more than the minimum requirements of the ES 2 specification. This is an issue if you’re planning on developing for mobile devices as what works on a desktop won’t necessarily work on mobile.

E.g. ES 2 only requires 10 bits of precision in the fragment shader and 16 bits in the vertex shader; a desktop system will provide IEEE-754 single-precision (24 bits) for both. ES 2 doesn’t require specific texture colour depths; a system which only provides 5:5:5 and 4:4:4:4 would be acceptable. A desktop system is bound to support 8:8:8:8 and probably a lot more (OpenGL 2.x requires support for 16:16:16:16 formats).
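
One place the precision difference shows up directly in source code: GLSL ES has no default float precision in the fragment shader, so you have to declare one, while desktop GLSL 1.30+ parses precision qualifiers but treats them as no-ops. A fragment shader written to compile on both might look like this (a sketch; the variable name is just for illustration, written as a C string the way you’d embed it in C++):

const char* esFragmentShaderSource =
    "#ifdef GL_ES\n"                    // defined by GLSL ES compilers only
    "precision mediump float;\n"        // mandatory default precision in ES 2
    "#endif\n"
    "varying vec4 vColor;\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = vColor;\n"
    "}\n";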

As for shaders: you can implement the Phong illumination model and variations of it, but expecting the fragment shader to do anything beyond “lighting” is likely to run into problems. Anything advanced will be frustrated by absent functionality, restrictions, and/or the lack of precision.

Cool, sounds great.

Regarding the more elaborate operations on fragment shaders, will something like ES 3.2 help?

Also, is the following snippet how I could go about using OpenGL ES 3.2 in my C++ code:

...
#include <EGL/egl.h>
#include <GLES3/gl32.h>
...
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);

eglInitialize(display, NULL, NULL);

EGLConfig config;
EGLint numConfigs;
EGLint attribList[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};
eglChooseConfig(display, attribList, &config, 1, &numConfigs);
EGLint contextAttribs[] = {
    EGL_CONTEXT_CLIENT_VERSION, 3,
    EGL_NONE
};
EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
...

Thanks again for all the help, much appreciated.

ES 3 is much closer to desktop OpenGL.

One thing I notice: you’ll want:

    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES3_BIT,

for ES 3.
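
Putting that together with the EGL_CONTEXT_CLIENT_VERSION, 3 you already have, the relevant attribute lists would look roughly like this (an untested sketch):

EGLint attribList[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES3_BIT,   // ask for an ES 3.x-capable config
    EGL_NONE
};

EGLint contextAttribs[] = {
    EGL_CONTEXT_CLIENT_VERSION, 3,             // request an ES 3.x context
    EGL_NONE
};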

Thank you, it’s exactly the information that I needed.

Hi @oglnewbie , I’m a newbie in this field and I wish to do exactly the same thing as you did, and I plan to use EGL and GLES as well. Could you share your complete code for offscreen rendering the red triangle? That would help me quite a lot.
Thank you very much in advance!

Hello @fjiang9, I will give you the code that I got.

#include <EGL/egl.h>

#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <EGL/eglext.h>
#include <GL/gl.h>

#include <vector>   // for the std::vector used to read back the pixels

const int WIDTH = 512;
const int HEIGHT = 512;

const char* vertexShaderSource = 
    "#version 330 core\n"
    "layout (location = 0) in vec3 aPos;\n"
    "void main()\n"
    "{\n"
    "    gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);\n"
    "}\n";

const char* fragmentShaderSource =
    "#version 330 core\n"
    "out vec4 FragColor;\n"
    "void main()\n"
    "{\n"
    "    FragColor = vec4(1.0f, 0.5f, 0.2f, 1.0f);\n"
    "}\n";

int main(int argc, char* argv[]) {
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLint major;
    EGLint minor;
    eglInitialize(display, &major, &minor);
    eglBindAPI(EGL_OPENGL_API);
    EGLint configAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RED_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE, 8,
        EGL_ALPHA_SIZE, 8,
        EGL_DEPTH_SIZE, 24,
        EGL_STENCIL_SIZE, 8,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig config;
    EGLint numConfigs;
    eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);
    EGLint contextAttribs[] = {
        EGL_CONTEXT_MAJOR_VERSION, 3,
        EGL_CONTEXT_MINOR_VERSION, 3,   // the shaders above use "#version 330 core"
        EGL_CONTEXT_OPENGL_PROFILE_MASK, EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT,
        EGL_NONE
    };
    EGLContext eglContext = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
    EGLint surfaceAttribs[] = {
        EGL_WIDTH, WIDTH,
        EGL_HEIGHT, HEIGHT,
        EGL_NONE
    };
    EGLSurface surface = eglCreatePbufferSurface(display, config, surfaceAttribs);
    eglMakeCurrent(display, surface, surface, eglContext);
 
    GLuint vertex, fragment;

    // Compile vertex shader
    vertex = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertex, 1, & vertexShaderSource, NULL);
    glCompileShader(vertex);

    // Compile fragment shader
    fragment = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragment, 1, & fragmentShaderSource, NULL);
    glCompileShader(fragment);

    GLuint program = glCreateProgram();
    glAttachShader(program, vertex);
    glAttachShader(program, fragment);
    glLinkProgram(program);

    GLfloat vertices[] = {
        0.0f,  0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f,
        0.5f, -0.5f, 0.0f,
    };

    // A core-profile context requires a vertex array object to be bound for drawing.
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask (GL_TRUE);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glUseProgram(program);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

    glDrawArrays(GL_TRIANGLES, 0, 3);

    std::vector<GLubyte> pixels(WIDTH * HEIGHT * 4);
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // At this point you can save the triangle pixel data as an image file, e.g., PNG

    eglDestroyContext(display, eglContext);
    eglDestroySurface(display, surface);
    eglTerminate(display);

    return 0;
}

The above code does not have any error checks (which it should), but it is something to get you started.
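
If you want to dump those pixels to a file without pulling in an image library, a plain binary PPM is the easiest target. Here is a small sketch of a helper (the name writePPM is just something I made up; note that glReadPixels returns rows bottom-up, so the loop flips them):

#include <GL/gl.h>
#include <cstdio>
#include <vector>

// Writes RGBA pixels (as returned by glReadPixels) to a binary PPM (P6) file,
// flipping vertically because OpenGL's origin is the bottom-left corner.
bool writePPM(const char* path, const std::vector<GLubyte>& rgba, int width, int height) {
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fprintf(f, "P6\n%d %d\n255\n", width, height);
    for (int y = height - 1; y >= 0; --y) {
        for (int x = 0; x < width; ++x) {
            const GLubyte* p = &rgba[(y * width + x) * 4];
            std::fputc(p[0], f);  // R
            std::fputc(p[1], f);  // G
            std::fputc(p[2], f);  // B (alpha is dropped; PPM is RGB only)
        }
    }
    std::fclose(f);
    return true;
}

You would call it as writePPM("triangle.ppm", pixels, WIDTH, HEIGHT); right after the glReadPixels call.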

It seems I had some driver issues and got a bit off track with my code.

To clarify my issue, because I have been too vague:

I would like to use OpenGL EGL with OpenGL 3.x to render an offscreen triangle and save it to an image file such as PNG, and I am using Ubuntu 22.04. I do not have an X server running on the machine, so I would like EGL to create a context that I can then use with OpenGL 3.x to render the triangle using shaders and such. After this I would like to read the pixels into an array for building an image file (PPM or PNG).

I have read that EGL can operate in cases where there is no X server running, but I need to use the platform device extension. Will the following work to properly initialize EGL?

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GL/gl.h>
#include <iostream>
...
// Get function pointers for EGL extensions
PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT = reinterpret_cast<PFNEGLQUERYDEVICESEXTPROC>(eglGetProcAddress("eglQueryDevicesEXT"));
PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT = reinterpret_cast<PFNEGLGETPLATFORMDISPLAYEXTPROC>(eglGetProcAddress("eglGetPlatformDisplayEXT"));

if (!eglQueryDevicesEXT || !eglGetPlatformDisplayEXT) {
    std::cerr << "Failed to get function pointers for EGL extensions" << std::endl;
    return 1;
}

EGLDisplay eglDisplay;
EGLContext eglContext;
EGLSurface eglSurface;

// Initialize EGL display
EGLDeviceEXT device;
EGLint numDevices;
eglQueryDevicesEXT(0, NULL, &numDevices);
EGLDeviceEXT* devices = new EGLDeviceEXT[numDevices];
eglQueryDevicesEXT(numDevices, devices, &numDevices);
device = devices[0];
delete[] devices;

EGLint attribs[] = { EGL_NONE };
eglDisplay = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, device, attribs);
eglInitialize(eglDisplay, NULL, NULL);
...

I would then like to use the display initialized above to create a context for OpenGL 3.x. Is this correct?

Thank you again for any help, and once again I apologize for any confusion my previous postings may have caused.

We’ve pretty much been down this road twice. So just to summarize:

  • The EGL spec supports this. That’s not an obstacle. So…
  • Getting this support depends on:
    1. Whether, for all GPU vendors you care about, the EGL library you’re using provides GPU rendering support w/o an X server for that GPU vendor, and
    2. Whether the admin of the remote machine is willing to give you access to use this method.

Re #1, only you know the set of GPU vendors (and GPU model numbers per vendor) that you’d like to support. So you are going to have to do this investigation. NVIDIA supports this, for their GPUs and graphics drivers. I don’t know about the rest.
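
One quick way to investigate a particular machine/driver is to print the EGL client extension string before creating any display and look for the device-platform extensions your code relies on. A sketch (this query requires EGL 1.5 or EGL_EXT_client_extensions; on older drivers it simply returns NULL):

#include <EGL/egl.h>
#include <cstdio>
#include <cstring>

int main() {
    // With EGL 1.5 / EGL_EXT_client_extensions this returns the client
    // extension list; it returns NULL on implementations that lack it.
    const char* ext = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
    if (!ext) {
        std::puts("No client extension string (old EGL)");
        return 1;
    }
    std::puts(ext);

    // The pieces the device-platform code path above needs:
    bool ok = std::strstr(ext, "EGL_EXT_platform_device") != NULL &&
              (std::strstr(ext, "EGL_EXT_device_base") != NULL ||
               std::strstr(ext, "EGL_EXT_device_enumeration") != NULL);
    std::puts(ok ? "Device platform advertised" : "Device platform NOT advertised");
    return 0;
}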

Re #2, you also are going to have to explore this as you know who the admin is.

Try it! Whether it works still depends on #1 and #2 above.
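
If the extensions are there, the rest can follow the same pattern as your earlier pbuffer example, just starting from the platform display instead of EGL_DEFAULT_DISPLAY. A rough, untested sketch of the remaining steps:

// Continuing from eglInitialize(eglDisplay, NULL, NULL) in your snippet.
eglBindAPI(EGL_OPENGL_API);                      // desktop OpenGL, not ES

const EGLint cfgAttribs[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
    EGL_DEPTH_SIZE, 24,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};
EGLConfig cfg;
EGLint numCfgs;
eglChooseConfig(eglDisplay, cfgAttribs, &cfg, 1, &numCfgs);

const EGLint ctxAttribs[] = {
    EGL_CONTEXT_MAJOR_VERSION, 3,                // needs EGL 1.5 or EGL_KHR_create_context
    EGL_CONTEXT_MINOR_VERSION, 3,
    EGL_NONE
};
eglContext = eglCreateContext(eglDisplay, cfg, EGL_NO_CONTEXT, ctxAttribs);

const EGLint pbAttribs[] = { EGL_WIDTH, 512, EGL_HEIGHT, 512, EGL_NONE };
eglSurface = eglCreatePbufferSurface(eglDisplay, cfg, pbAttribs);
eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);
// (If EGL_KHR_surfaceless_context is supported, you can skip the pbuffer and
//  render into a framebuffer object instead.)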

Thank you @Dark_Photon for all the help and patience with my questions.

The code I am developing is expected to run across multiple GPU devices (AMD, NVIDIA, etc.), so getting this to execute properly is probably not going to be consistent. My machine has an NVIDIA Quadro P4000, so it should work in this case, but who knows about the other GPUs.

I will give the code a try when I get back and see what happens though.