EGL context with Cg shader

I am trying to adapt some software for off-screen rendering on a multi-GPU cluster. No display output is needed, as the framebuffer content is saved to file. Currently the GLX API is used to establish a connection to the X11 system on Linux; however, this does not handle multiple GPUs very well.

I then came across the article “EGL Eye: OpenGL Visualization without an X Server” on the Nvidia DevBlogs, which provides an appealing solution using the EGL API (instead of GLX & X11). First tests involving GLSL shaders proved successful.

However, the actual code relies on numerous legacy Cg shaders, and that is where the problems begin, as the Cg framework does not seem to be compatible with the EGL context. It already fails when attempting to select a Cg profile (ERROR: “unknown”).

The same code works without errors when using GLX, and the same EGL setup routine works when the Cg shaders are replaced by GLSL shaders.

Does anyone have experience combining EGL and Cg shaders, and/or know whether this combination can be made to work?

/* minimal (incomplete) code example */

// EGL headers
#define EGL_EGLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>

// Cg headers
#include <Cg/cg.h>
#include <Cg/cgGL.h>

// -------------------------------------
// EGL setup (context + GPU connection)
// -------------------------------------

// Retrieving number of devices / GPUs

EGLDeviceEXT eglDevices[4];
EGLint numDevices;
eglQueryDevicesEXT(4, eglDevices, &numDevices);

// Setting and initializing "display"
// here: selecting GPU 0 with eglDevices[0]

EGLDisplay eglDpy = eglGetPlatformDisplayEXT( EGL_PLATFORM_DEVICE_EXT, eglDevices[0], 0);
eglInitialize( eglDpy, ...);

// "display" configuration

EGLint configAttribs[] = { EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, ..., EGL_NONE };
EGLConfig eglCfg;
EGLint numConfigs;
eglChooseConfig( eglDpy, configAttribs, &eglCfg, 1, &numConfigs );

// binding API

eglBindAPI( EGL_OPENGL_API );

// creating context

EGLContext eglCtx = eglCreateContext( ... );
eglMakeCurrent( eglDpy, EGL_NO_SURFACE, EGL_NO_SURFACE, eglCtx );

// ---------------------------
// Framebuffer setup
// ---------------------------

unsigned int framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer( GL_FRAMEBUFFER, framebuffer );
...

// --------------------------
// CG shader setup
// --------------------------

CGcontext cgContext = cgCreateContext(); // no error
CGprofile cgVertexProfile = cgGLGetLatestProfile( CG_GL_VERTEX ); // "unknown" error

Hi qCat

I think cgGLGetLatestProfile expects a valid GLX context, but here there is only an EGL context (with Wayland on Unix platforms), so some of the Cg runtime’s functions may not be usable in this setup.

Anyway, no problem! You can get by with minimal use of the Cg runtime: you only need cgCreateProgram and cgGetProgramString to obtain GLSL code directly from the Cg compiler, and then create the GLSL shaders/program with direct OpenGL API calls.

Instead of cgGLGetLatestProfile you should use CG_PROFILE_GLSLV for vertex shaders and CG_PROFILE_GLSLF for pixel shaders.
Simple code:

const char* vsCode = "...";
// create a Cg program from the Cg source code
// (EntryPoint and Args are placeholders for the entry-point name and compiler arguments)
CGprogram cgVsProgram = cgCreateProgram( cgContext, CG_SOURCE, vsCode, CG_PROFILE_GLSLV, EntryPoint, &Args[0] );
// get the translated GLSL code
const char* glslVsCode = cgGetProgramString( cgVsProgram, CG_COMPILED_PROGRAM );

const char* psShaderCode = "...";

CGprogram cgPsProgram = cgCreateProgram( cgContext, CG_SOURCE, psShaderCode, CG_PROFILE_GLSLF, EntryPoint, &Args[0] );

const char* glslPsCode = cgGetProgramString( cgPsProgram, CG_COMPILED_PROGRAM );

// now we can use the OpenGL API directly
GLuint vsShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vsShader, 1, &glslVsCode, NULL);

GLuint psShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(psShader, 1, &glslPsCode, NULL);
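
For completeness, the remaining steps are the usual GLSL compile/link sequence with plain OpenGL calls (a sketch; checking the compile/link logs via glGetShaderInfoLog / glGetProgramInfoLog is omitted here):

// compile the translated GLSL shaders
glCompileShader(vsShader);
glCompileShader(psShader);

// attach and link them into a regular GLSL program object
GLuint program = glCreateProgram();
glAttachShader(program, vsShader);
glAttachShader(program, psShader);
glLinkProgram(program);
glUseProgram(program);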

Thank you for the suggestion! I wasn’t aware that the Cg compiler can generate GLSL shader code.

I gave the method a try on a simplified (single shader) example. Both the conversion from Cg to GLSL code and the GLSL shader setup worked fine. However, getting the rendering to work was still a bit of a hassle. In particular, I ran into the following problems:

  • The automatic conversion to GLSL changes the names of the uniforms:
    e.g. Cg: uniform float myVar (transformed to) GLSL: uniform float _myVar1,
    with some variables also getting a different suffix number (e.g. _myVar2).
    The host code therefore has to be adjusted to the new names. I didn’t find any good reference describing the naming convention used during the auto-conversion, so the adjustment requires inspecting the resulting GLSL code (see the sketch after this list for one way to automate the lookup).

  • One has to be careful when passing variables to the shader. In the Cg runtime an integer parameter is (as I understood it) passed through cgGLSetParameter1f(), the ‘float’ version, since a dedicated integer setter does not exist. In GLSL, however, the analogous glUniform1f() does not work for an integer uniform; one has to use the true integer version glUniform1i() instead. So simply replacing cgGLSetParameter1f() with its closest GLSL counterpart can lead to problems (also illustrated in the sketch below).
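
For what it’s worth, here is a rough sketch of how the renaming could be handled programmatically: since the translated name seems to keep the original Cg name as a substring (myVar → _myVar1), one can enumerate the active uniforms of the linked program with glGetActiveUniform() and match them against the original names. The helper findTranslatedUniform and the uniforms myVar / myCount are just illustrative placeholders, not code from the actual project:

// Heuristic lookup of the GLSL location of a Cg uniform whose name was
// mangled by the Cg -> GLSL translation (e.g. "myVar" -> "_myVar1").
// Assumes <string.h> and the GL function-loader headers are already included.
GLint findTranslatedUniform(GLuint program, const char* cgName)
{
    GLint count = 0;
    glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);

    for (GLint i = 0; i < count; ++i)
    {
        char name[256];
        GLsizei length = 0;
        GLint size = 0;
        GLenum type = 0;
        glGetActiveUniform(program, (GLuint)i, sizeof(name), &length, &size, &type, name);

        // match by substring; note this is a heuristic and may need refinement
        // if several uniforms share the same base name (myVar, myVar2, ...)
        if (strstr(name, cgName) != NULL)
            return glGetUniformLocation(program, name);
    }
    return -1; // not found
}

// usage (hypothetical uniforms): a float uniform keeps glUniform1f(), but an
// int uniform needs glUniform1i() - the float setter raises GL_INVALID_OPERATION
glUniform1f(findTranslatedUniform(program, "myVar"),   0.5f);
glUniform1i(findTranslatedUniform(program, "myCount"), 3);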

As some care is required with the variable naming (and who really knows which names they get converted to at runtime?), I believe a less error-prone method might be to generate the GLSL shader files ahead of time with the cgc compiler and go from there:
cgc -profile glslv -p version=... -entry main <vertex shader filename>
cgc -profile glslf -p version=... -entry main <fragment shader filename>
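
Loading such a pre-generated GLSL file at runtime is then ordinary file handling plus the standard shader calls, roughly along these lines (a sketch; the file name is a placeholder and error handling is omitted):

// read the cgc-generated GLSL source from file (assumes <stdio.h> / <stdlib.h>)
FILE* f = fopen("vertex_shader.glsl", "rb");
fseek(f, 0, SEEK_END);
long size = ftell(f);
fseek(f, 0, SEEK_SET);
char* src = (char*)malloc(size + 1);
fread(src, 1, size, f);
src[size] = '\0';
fclose(f);

// from here on it is treated like any hand-written GLSL shader
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, (const char**)&src, NULL);
glCompileShader(vs);
free(src);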

Well, then this all comes down to converting the shaders and code to GLSL. It works, but it is not the “add some additional code”, “redefine some Cg functions” and “everything works like a charm” approach either. I am still quite hesitant to tackle a larger project with many shaders …
