"NVIDIA releases OpenGL 3.2 beta drivers"

It’s sad, but the fact is that D3D IS a better option if you can afford to support only Windows (Vista).

Are there any real platforms other than Windows?

You’d be surprised.

I meant graphics platforms, well, other than Apple; they have their own “API.”

Sorry, I think I’m at fault here for not understanding something.

I confess I didn’t realize the distinction between compatibility and forward-compatible. I assumed that by not specifying forward-compatible I got backward compatibility

Ah, I see the confusion. Indeed, that assumption is incorrect.

(as was the case up through GL 3.1).

Not entirely correct either. If you create an OpenGL 3.1 context, you get backwards compatibility only if the ARB_compatibility extension is also supported.
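For example, here is a minimal sketch of checking for that extension (my illustration, assuming a current 3.x context; glGetStringi is fetched through glXGetProcAddress like any other post-1.2 entry point):

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>

// Sketch: with a 3.1 context current, report whether GL_ARB_compatibility
// is in the extension list (queried entry by entry via glGetStringi).
static int hasARBCompatibility( void )
{
  PFNGLGETSTRINGIPROC GetStringi = (PFNGLGETSTRINGIPROC)
        glXGetProcAddress( (const GLubyte *) "glGetStringi" );
  GLint n = 0;

  if ( !GetStringi )
    return 0;

  glGetIntegerv( GL_NUM_EXTENSIONS, &n );
  for ( GLint i = 0; i < n; i++ )
    if ( !strcmp( (const char *) GetStringi( GL_EXTENSIONS, i ),
                  "GL_ARB_compatibility" ) )
      return 1;
  return 0;
}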

Now I think I see the distinction:

  • PROFILE_MASK = COMPATIBILITY -> Backward compatibility included
  • PROFILE_MASK = CORE -> Backward compatibility killed
  • FLAGS = 0 -> Deprecated but still supported APIs included

By FLAGS = 0, do you mean that no parameters at all are passed to CreateContextAttribsARB? If so, you get the highest supported version of OpenGL that is backwards compatible. If by FLAGS = 0 you mean that you leave out a PROFILE_MASK, then you get a Core profile (and you need to ask for GL 3.2 as well).

  • FLAGS = FORWARD_COMPAT -> Deprecated but still supported APIs killed

So for the most lax profile (include all the old stuff), you want PROFILE_MASK = COMPATIBILITY and no FLAGS = FORWARD_COMPAT. And for the most strict profile, you want PROFILE_MASK = CORE and FLAGS = FORWARD_COMPAT.

Yes. Although I am not sure it makes sense to ship an application with the forward compatible flag set. It is useful during development if you do not want to use deprecated features, and want to get warned (or get errors) if you accidentally do so. But that is up to you.
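To make the two extremes concrete, here is a sketch of the corresponding GLX attribute lists (token names come from the GLX_ARB_create_context and GLX_ARB_create_context_profile specs; older glxext.h headers may require defining them by hand, as the test program later in this thread does for the version tokens). Either array is passed as the last argument of glXCreateContextAttribsARB.

// Strictest combination: 3.2 core profile plus the forward-compatible
// flag, so deprecated functionality is removed entirely.
static const int strict_attribs[] =
{
  GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
  GLX_CONTEXT_MINOR_VERSION_ARB, 2,
  GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
  GLX_CONTEXT_FLAGS_ARB,         GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
  None
};

// Most lax combination: 3.2 compatibility profile, no forward-compatible
// flag, so all the old stuff is still available.
static const int lax_attribs[] =
{
  GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
  GLX_CONTEXT_MINOR_VERSION_ARB, 2,
  GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
  None
};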

Barthold
(with my ARB hat on)

What do you mean ? The Apple platforms use OpenGL and OpenGL ES.

What do you think about Linux? Universities all over the world use Linux for scientific visualization, for example, and their only option is to use OpenGL. Besides, I think the open source community is still growing, and it keeps making huge progress in terms of usability.

Nevertheless, I think it would be great if an OpenGL driver certification program existed, because it would probably increase the quality of the drivers.

Mac, iPhone, Android, Symbian, Maemo, Linux, game consoles.

Plasma TVs, Blu-ray players, and other consumer electronics devices, many of which run some version of embedded Linux with OpenGL ES.

By FLAGS = 0, do you mean that no parameters at all are passed to CreateContextAttribsARB?
I mean calling it like this:


      static int Context_attribs[] =
      {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 2,
        None
      };
      Context = glXCreateContextAttribsARB( display, FBConfig, 0, 
                                            True, Context_attribs );

with no GLX_CONTEXT_FLAGS_ARB attrib.

If so, you get the highest supported version of OpenGL that is backwards compatible. If by FLAGS = 0 you mean that you leave out a PROFILE_MASK, then you get a Core profile (and you need to ask for GL 3.2 as well).

Ok, that part about implicitly defaulting to the highest available version is good to know.

Thanks, Barthold.

I have another NVIDIA OpenGL 3.2-related question that came up yesterday. Some internal GeForce 7 users reported that running our app (now using the new context creation API with 3.x-capable drivers) results in a fatal Xlib error when the GL context is allocated (but only on pre-GeForce 8 cards!).

I’ve attached full source for a test prog below, along with the working (GeForce 8) and broken (GeForce 7) output.

In short, with the 190.18.03 drivers (Linux), a GeForce 8+ advertises GL 3.2.0 (per GL_VERSION), whereas a GeForce 7 advertises GL 2.1.2. On both cards:

  1. ARB_create_context is supported,
  2. glXGetProcAddress( “glXCreateContextAttribsARB” ) returns a valid pointer, and
  3. glXCreateContextAttribsARB “appears” to create a 3.0 context,

however, on the GeForce 7, glXIsDirect returns “False”, and a glXMakeCurrent triggers a BadAlloc Xlib error, killing the app.

So my question is: what is the correct way to allocate a 3.x context on a GeForce 8+ and a 2.x context on a GeForce 7-, without resorting to GL_RENDERER or GL_VERSION parsing hackery? The app is coded to fall back to the old context create if the 3.x context create fails, but the create isn’t returning failure; it’s dumping the app with an Xlib error.

The source code is below, but first, the GeForce 8 output:


...
Creating dummy old-style context
GL_VERSION = 3.2.0 NVIDIA 190.18.03
Deleting dummy old-style context
Creating context
Created GL 3.0 context
Verifying that context is direct
Making context current

and the broken output on GeForce 7:


Creating dummy old-style context
GL_VERSION = 2.1.2 NVIDIA 190.18.03
Deleting dummy old-style context
Creating context
Created GL 3.0 context
Verifying that context is direct
WARNING: Indirect GLX rendering context obtained
Making context current
X Error of failed request:  BadAlloc (insufficient resources for operation)
  Major opcode of failed request:  144 (GLX)
  Minor opcode of failed request:  34 ()
  Serial number of failed request:  38
  Current serial number in output stream:  39
libxcb: WARNING! Program tries to lock an already locked connection,
        which indicates a programming error.
        There will be no further warnings about this issue.

and finally, here’s the full program. Compile with:

g++ -o gl3_geforce7_broke gl3_geforce7_broke.cxx -lGL -lX11


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define GL_GLEXT_PROTOTYPES 1
#define GLX_GLXEXT_PROTOTYPES 1
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/gl.h>
#include <GL/glx.h>

#define GLX_CONTEXT_MAJOR_VERSION_ARB       0x2091
#define GLX_CONTEXT_MINOR_VERSION_ARB       0x2092
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);

int main (int argc, char ** argv)
{
  Display *display = XOpenDisplay(0);

  if ( !display )
  {
    printf( "Failed to open X display
" );
    exit(1);
  }

  // Query/print GLX version
  int major, minor;

  if ( !glXQueryVersion( display, &major, &minor ) )
  {
    printf( "glXQueryVersion failed
" );
    exit(1);
  }
  printf( "GLX version = %d.%d
", major, minor );

  if ( major < 1 || major == 1 && minor < 3 )
  {
    printf( "GLX version is too old; must be > 1.3
" );
    exit(1);
  }

  // Print GLX extensions
  //const char *extensions = glXQueryExtensionsString( display, 
  //                                                DefaultScreen( display ) );
  //printf( "%s
", extensions );

  // Get a matching FB config
  static int visual_attribs[] =
    {
      GLX_X_RENDERABLE    , True,
      GLX_DRAWABLE_TYPE   , GLX_WINDOW_BIT,
      GLX_RENDER_TYPE     , GLX_RGBA_BIT,
      GLX_X_VISUAL_TYPE   , GLX_TRUE_COLOR,
      GLX_RED_SIZE        , 8,
      GLX_GREEN_SIZE      , 8,
      GLX_BLUE_SIZE       , 8,
      GLX_ALPHA_SIZE      , 8,
      GLX_DEPTH_SIZE      , 24,
      GLX_STENCIL_SIZE    , 8,
      GLX_DOUBLEBUFFER    , True,
      //GLX_SAMPLE_BUFFERS  , 1,
      //GLX_SAMPLES         , 4,
      None
    };

  printf( "Getting framebuffer configs
" );
  int fbcount;
  GLXFBConfig *fbc = glXChooseFBConfig( display, DefaultScreen( display ), 
                                        visual_attribs, &fbcount );
  if ( !fbc )
  {
    printf( "Failed to retrieve a framebuffer config
" );
    exit(1);
  }
  printf( "Found %d matching FB configs.
", fbcount );

  // Pick the FB config/visual with the most samples per pixel
  printf( "Getting XVisualInfos
" );
  int best_fbc = -1, worst_fbc = -1, best_num_samp = -1, worst_num_samp = 999;

  for ( int i = 0; i < fbcount; i++ )
  {
    XVisualInfo *vi = glXGetVisualFromFBConfig( display, fbc[i] );
    if ( vi )
    {
      int samp_buf, samples;
      glXGetFBConfigAttrib( display, fbc[i], GLX_SAMPLE_BUFFERS, &samp_buf );
      glXGetFBConfigAttrib( display, fbc[i], GLX_SAMPLES       , &samples  );
      
      printf( "  Matching fbconfig %d, visual ID 0x%2x: SAMPLE_BUFFERS = %d,"
              " SAMPLES = %d
", 
              i, vi -> visualid, samp_buf, samples );

      if ( best_fbc < 0 || samp_buf && samples > best_num_samp )
        best_fbc = i, best_num_samp = samples;
      if ( worst_fbc < 0 || !samp_buf || samples < worst_num_samp )
        worst_fbc = i, worst_num_samp = samples;
    }
    XFree( vi );
  }

  // Get a visual
  int fbc_id = best_fbc;
  //int fbc_id = worst_fbc;

  XVisualInfo *vi = glXGetVisualFromFBConfig( display, fbc[ fbc_id ]  );
  printf( "Chosen visual ID = 0x%x
", vi->visualid );

  printf( "Creating colormap
" );
  XSetWindowAttributes swa;
  swa.colormap = XCreateColormap( display, RootWindow( display, vi->screen ), 
                                  vi->visual, AllocNone );
  swa.background_pixmap = None ;
  swa.border_pixel      = 0;
  swa.event_mask        = StructureNotifyMask;

  printf( "Creating window
" );
  Window win = XCreateWindow( display, RootWindow( display, vi->screen ), 
                              0, 0, 100, 100, 0, vi->depth, InputOutput, 
                              vi->visual, 
                              CWBorderPixel|CWColormap|CWEventMask, &swa );
  if ( !win )
  {
    printf( "Failed to create window.
" );
    exit(1);
  }

  XStoreName( display, win, "GL 3.0 Window");

  printf( "Mapping window
" );
  XMapWindow( display, win );

  // See if GL driver supports glXCreateContextAttribsARB()
  //   Create an old-style GLX context first, to get the correct function ptr.
  glXCreateContextAttribsARBProc glXCreateContextAttribsARB = 0;

  printf( "Creating dummy old-style context
" );

  GLXContext ctx_old = glXCreateContext( display, vi, 0, True );

  glXMakeCurrent( display, win, ctx_old );

  printf( "GL_VERSION = %s
", glGetString( GL_VERSION ) );

  glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc)
           glXGetProcAddress( (const GLubyte *) "glXCreateContextAttribsARB" );

  GLXContext ctx = 0;

  // If it doesn't, just use the old-style 2.x GLX context
  if ( !glXCreateContextAttribsARB )
  {
    printf( "glXCreateContextAttribsARB() not found"
            " ... using old-style GLX context
" );
    ctx = ctx_old;
  }

  // If it "does", try to get a GL 3.0 context!
  else
  {
    printf( "Deleting dummy old-style context
" );

    glXMakeCurrent( display, None, 0 );
    glXDestroyContext( display, ctx_old );

    static int context_attribs[] =
      {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 0,
        //GLX_CONTEXT_FLAGS_ARB        , GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        None
      };

    printf( "Creating context
" );
    ctx = glXCreateContextAttribsARB( display, fbc[ fbc_id ], 0, 
                                      True, context_attribs );
    if ( ctx )
      printf( "Created GL 3.0 context
" );
    else
    {
      // Couldn't create GL 3.0 context.  Fall back to old-style 2.x context.
      printf( "Failed to create GL 3.0 context"
              " ... using old-style GLX context
" );
      ctx = glXCreateContext( display, vi, 0, True );
    }
  }

  XFree( fbc );

  // Verifying that context is a direct context
  printf( "Verifying that context is direct
" );
  if ( ! glXIsDirect ( display, ctx ) )
  {
    printf( "WARNING: Indirect GLX rendering context obtained
" );
    //exit(1);
  }

  printf( "Making context current
" );
  glXMakeCurrent( display, win, ctx );

  glClearColor ( 0, 0.5, 1, 1 );
  glClear ( GL_COLOR_BUFFER_BIT );
  glXSwapBuffers ( display, win );

  sleep( 1 );

  glClearColor ( 1, 0.5, 0, 1 );
  glClear ( GL_COLOR_BUFFER_BIT );
  glXSwapBuffers ( display, win );

  sleep( 1 );

  ctx = glXGetCurrentContext(  );
  glXMakeCurrent( display, 0, 0 );
  glXDestroyContext( display, ctx );
}

It is all spelled out in the WGL/GLX ARB_create_context specs. See http://www.opengl.org/registry/specs/ARB/wgl_create_context.txt

To quote from the spec:

The default values for WGL_CONTEXT_MAJOR_VERSION_ARB and WGL_CONTEXT_MINOR_VERSION_ARB are 1 and 0 respectively. In this case, implementations will typically return the most recent version of OpenGL they support which is backwards compatible with OpenGL 1.0 (e.g. 3.0, 3.1 + GL_ARB_compatibility, or 3.2 compatibility profile)
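In GLX terms, that means a sketch like the one below, with no version attributes at all, behaves as if 1.0 were requested and typically returns the highest backwards-compatible version the driver supports (display and fbconfig are assumed to be set up as in the test program above):

// Sketch: no version attributes -> defaults to requesting 1.0, so the
// driver typically returns the highest version that is backwards
// compatible with 1.0 (e.g. a 3.2 compatibility profile).
static const int default_attribs[] = { None };

GLXContext ctx = glXCreateContextAttribsARB( display, fbconfig, 0,
                                             True, default_attribs );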

Barthold
(with my ARB hat on)

Yes, that does not look right. Asking for a 3.0 context on a GeForce 7 should fail. The create context spec says:

“On failure glXCreateContextAttribsARB returns NULL and generates an X error with extended error information.”

Thanks for the source code. We’ll take a look asap.

Barthold
(with my NVIDIA hat on)

Thanks, Barthold.

Also just hit a similar (probably the same) bug with another user, this one on G80+ hardware (3.x capable).

  • Requesting a 3.1 context from 3.0-capable (but not 3.1-capable) drivers on 3.x-capable hardware (G80+) returns a bogus non-NULL context pointer that results in an X error when bound.

This is similar to the previously reported problem where:

  • Requesting a 3.x context from 3.x-capable drivers on 2.1-only-capable hardware (pre-G80) returns a bogus non-NULL context pointer that results in an X error when bound…

So it seems the general bug is: on an OpenGL 3.x-capable driver, if you request a GL context with a GL version greater than the version the driver supports on that hardware, it returns a bogus context that terminates the app with an X error rather than returning a NULL context pointer.

I implemented an ugly GL_VERSION check hack to work around this (see the sketch below); the app now never asks for a version greater than what the driver reports.
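For reference, the hack is roughly the following sketch (clampRequestedVersion is a hypothetical helper name): with the dummy old-style context current, parse the leading major.minor of GL_VERSION and never request anything newer.

#include <stdio.h>
#include <GL/gl.h>

// Sketch of the GL_VERSION clamp hack: requires an old-style context to
// already be current so glGetString( GL_VERSION ) is valid.
static void clampRequestedVersion( int *req_major, int *req_minor )
{
  int drv_major = 1, drv_minor = 0;
  const char *ver = (const char *) glGetString( GL_VERSION );

  if ( ver && sscanf( ver, "%d.%d", &drv_major, &drv_minor ) == 2 )
  {
    if ( *req_major > drv_major ||
         ( *req_major == drv_major && *req_minor > drv_minor ) )
    {
      *req_major = drv_major;
      *req_minor = drv_minor;
    }
  }
}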

Dark Photon,

First of all, we fixed this (hopefully) in the new drivers we just posted:

http://developer.nvidia.com/object/opengl_3_driver.html

glXCreateContextAttribsARB() should return a NULL pointer now if it cannot create a context.

More details:

Up to now, GLX context creation was implemented by NVIDIA as an asynchronous X request (i.e., the X request is sent to the X server, and the client implementation returns to the application before the X server processes the request). If creation of the context failed, an X error was generated and making current to it would fail.

A possibly better workaround would be to use XSetErrorHandler(3) to register for the context creation X error and check for it, as follows:

static Bool errorOccurred = False;

static int ErrorHandler(Display *dpy, XErrorEvent *ev) {
    // Ignore the error, but make note of it.
    errorOccurred = True;

    return 0;
}

int main(int argc, char *argv[])
{
    Display *dpy;
    GLXFBConfig *fbConfigs;
    GLXContext ctx;
    int (*oldHandler)(Display *, XErrorEvent *);

    // Skipping not relevant code

    errorOccurred = False;
    oldHandler = XSetErrorHandler(&ErrorHandler);

    if (!oldHandler) {
        XFree(fbConfigs);
        printf("Failed to install X error event handler");
        return -1;
    }

    ctx = glXCreateContextAttribsARB(dpy,
                                     fbConfigs[0],
                                     NULL,
                                     True,
                                     contextAttribs);

    XSync(dpy, False);
    XSetErrorHandler(oldHandler);

    XFree(fbConfigs);

    if (!ctx || errorOccurred) {
        printf("Context creation failed");
        return -1;
    }

}

Barthold
(with my NVIDIA hat on)

Thanks, Barthold. Strangely, on a GeForce 7900 GTX, I still get an X error when I ask glXCreateContextAttribsARB to create a 3.0 or 3.1 context on the 190.18.04 driver:

X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 144 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 44
Current serial number in output stream: 45

A 2.1 context created via glXCreateContextAttribsARB on a 7900 works fine, as you’d expect.

Up to now, GLX context creation was implemented by NVIDIA as an asynchronous X request

Interesting. Thanks for the background.

> Implemented an ugly GL_VERSION check hack to work-around this.

A possibly better workaround would be to use XSetErrorHandler(3) to register for the context creation X error and check for it, as follows

Looks like I may need to do something like this for now anyway.

Am I correct in assuming that the application’s X and GLX state is not corrupted in any way if I bang on glXCreateContextAttribsARB a few times until it yields no X error after XSync, and then go on with the resulting GLX context? If so, I’ll definitely code up that technique. It’s more robust, and that way I don’t need to tell users and developers to avoid the early 3.x drivers.

> Strangely, on a GeForce 7900 GTX, I still get an X error when I ask glXCreateContextAttribsARB to create a 3.0 or 3.1 context

You should still expect an X error. That is the behavior defined by the spec. However, in the latest driver glXCreateContextAttribsARB should now return NULL as well, also as defined by the spec. Are you getting NULL returned?

Yes (just popped in a GeForce 7 and checked).

It’s unfortunate the extension is written the way it is: the driver has to take an async error and force it synchronous, then rethrow the X error to the app, which must catch it asynchronously and ignore it just to be able to use the NULL return from the context create call. A NULL return alone would have been sufficient.

At any rate, thanks for all the help! Our app is coded to treat either NULL or the X error as failure, and it repeatedly backs off with a lower version request, as in the sketch below.
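Concretely, the back-off looks something like this sketch. It reuses the ErrorHandler/errorOccurred pair from Barthold’s example and assumes the glXCreateContextAttribsARB pointer has already been fetched via glXGetProcAddress, as in the test program; createBestContext and the version table are illustrative names.

// Sketch: try versions newest-to-oldest; treat either a NULL return or a
// trapped X error (flushed out by XSync) as failure and back off.
static GLXContext createBestContext( Display *dpy, GLXFBConfig fbc )
{
  static const int versions[][2] = { {3,2}, {3,1}, {3,0}, {2,1} };
  GLXContext ctx = 0;
  int (*oldHandler)(Display *, XErrorEvent *) = XSetErrorHandler( &ErrorHandler );

  for ( int i = 0; i < 4 && !ctx; i++ )
  {
    int attribs[] =
    {
      GLX_CONTEXT_MAJOR_VERSION_ARB, versions[i][0],
      GLX_CONTEXT_MINOR_VERSION_ARB, versions[i][1],
      None
    };

    errorOccurred = False;
    ctx = glXCreateContextAttribsARB( dpy, fbc, 0, True, attribs );
    XSync( dpy, False );          // force any pending X error to arrive now

    if ( errorOccurred )          // treat the X error as failure for this version
    {
      if ( ctx )
        glXDestroyContext( dpy, ctx );
      ctx = 0;
    }
  }

  XSetErrorHandler( oldHandler );
  return ctx;                     // NULL if every version failed
}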

FYI, the new drivers we just posted complete the implementation of GLX context creation. As of 190.18.05, our implementation of glXCreateContextAttribsARB now also supports GLX_CONTEXT_PROFILE_MASK_ARB as an attribute.

http://developer.nvidia.com/object/opengl_3_driver.html

Regards,
Barthold
(with my NVIDIA hat on)