eglInitialize occasionally fails with EGL_BAD_ALLOC

In my Ubuntu 18 x64 VM, when I start an OpenGL ES app it occasionally fails with EGL_BAD_ALLOC in eglInitialize:

libEGL debug: Native platform type: x11 (build-time configuration)
libEGL debug: EGL user error 0x3003 (EGL_BAD_ALLOC) in eglInitialize: xcb_connect failed
libEGL debug: EGL user error 0x3001 (EGL_NOT_INITIALIZED) in eglInitialize

I echoed the $DISPLAY environment variable to check whether it is set, and it is.

Can anyone recommend further debugging measures (xcb/X11 debug-log environment variables or something similar) to get more information?

I already use EGL_LOG_LEVEL=debug, MESA_DEBUG=1, and WAYLAND_DEBUG=1.
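For reference, this is roughly how I launch the app with debugging enabled (the binary name `./my_gles_app` is a placeholder; `LIBGL_DEBUG=verbose` is an additional Mesa loader variable that reports driver-selection details):

```shell
# Export every relevant Mesa/EGL debug variable before launching the app.
export EGL_LOG_LEVEL=debug     # Mesa EGL: most verbose log level
export MESA_DEBUG=1            # Mesa core debug messages
export LIBGL_DEBUG=verbose     # Mesa loader: driver-selection details
export WAYLAND_DEBUG=1         # only relevant on a Wayland session

# Placeholder app name; capture stderr for later inspection:
# ./my_gles_app 2> egl_debug.log

# Show what the app would inherit:
env | grep -E 'EGL_LOG_LEVEL|MESA_DEBUG|LIBGL_DEBUG|WAYLAND_DEBUG'
```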

Regards

OK, so your application is failing to connect to the X server via EGL → XCB.

Note: XCB is the “X C Binding” layer, and xcb_connect() (typically the first XCB call an app makes) sets up a connection to the X server.

OK, let’s stop right there. You’re going to want to follow the directions applicable to the virtual machine (VM) and/or container software you’re running to properly set up support for GPU-accelerated rendering via EGL and X11 within that VM/container.

Which VM / container software are you running, and on top of which native GPU and drivers?

And which X server are you trying to connect to (often indicated by your $DISPLAY env var or a -display command-line option)? Have you opened up access on that X server (if needed) so that clients from your location are allowed to connect?
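A quick way to check both at once, sketched below; `xdpyinfo` exits non-zero when it cannot connect, and `xhost` lists or opens access control (the `:0` fallback is just an illustrative default):

```shell
# Is the X server at $DISPLAY actually reachable from here?
if xdpyinfo -display "${DISPLAY:-:0}" >/dev/null 2>&1; then
    echo "X server ${DISPLAY:-:0} reachable"
else
    echo "cannot connect to X server ${DISPLAY:-:0}"
fi

# If access control is blocking you, run on the server side, e.g.:
#   xhost        # list current access-control state
#   xhost +local:   # allow all local (non-network) clients
```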

Cross-ref:
https://stackoverflow.com/questions/69658874/eglinitialize-occasionally-fails-with-egl-bad-alloc

Hello, I was finally able to capture xdpyinfo output right before the crashing program starts:

xdpyinfo
name of display:    :5
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    11906000
X.Org version: 1.19.6
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    6
supported pixmap formats:
    depth 1, bits_per_pixel 1, scanline_pad 32
    depth 4, bits_per_pixel 8, scanline_pad 32
    depth 8, bits_per_pixel 8, scanline_pad 32
    depth 16, bits_per_pixel 16, scanline_pad 32
    depth 24, bits_per_pixel 32, scanline_pad 32
    depth 32, bits_per_pixel 32, scanline_pad 32
keycode range:    minimum 8, maximum 255
focus:  PointerRoot
number of extensions:    25

It’s the same for the crashing and successful runs.

The beginning of the glxinfo output shows the same display information as above.

The problem persists. Does anyone have an idea? Is eglTerminate necessary to guarantee cleanup of external resources, or does process shutdown guarantee that?