Help: app crashes with GLEW but not GLEE

I have a 3D graphics engine/server application I was developing until about a year ago. When I left off work on it, I had equivalent linux and windows versions, plus another windows version with more features added.

Recently I started trying to develop them again and built a new 64-bit ubuntu 10.04 system with codeblocks 10.05 to continue work. The windows system is still the same winxp64 system.

The new version on windows was based upon GLEW because GLEE had fallen behind and I needed some new features not supported then by GLEE.

When I got the most up-to-date application to compile on linux, it hit a segmentation violation when calling the following function:

glxfbconfig = glXChooseFBConfig (xdisplay, xscreen, (int*)&glxfbconfigattributes000, &glxfbconfigelements);

When I look at the assembly language where the segment violation happens, the instruction is

call edx

but register edx contains 0x00000000

Several lines up, register edx is set as follows:

mov edx, DWORD PTR ds:0x820a0b8

I spent a long time trying to figure out what I might have changed in the code that might have caused this. Eventually I did what I should have done from the start — tested the previously working version (the compatible version that ran correctly on both linux and windows).

Well, it still runs on both linux and windows.

Then I changed the project in only one way: switching from GLEE to GLEW. I could do this because this version was from just before I switched to GLEW to gain access to more advanced features.

The result: the same segmentation violation at the exact same place for the exact same reason (register edx == zero).

All I do to switch over is the following:

#1: Change one define in the IDE from “GLEE” to “GLEW”.
#2: Remove file “glee.c” from the project.
#3: Add file “glew.c” to the project.

That’s all. Step #1 causes the files in the application to:

#include <GL/glew.h>
#include <GL/glxew.h>  // or <GL/wglew.h> on windows

instead of:

#include <GL/glee.h>
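Concretely, the switch boils down to a preprocessor check along these lines (a sketch only; the macro name "GLEW" matches the define I set in the IDE, but treat the exact names as illustrative):

```cpp
// Sketch of the include switch driven by the IDE define.
// The macro name "GLEW" is illustrative; adapt to your project.
#ifdef GLEW
  #include <GL/glew.h>
  #ifdef _WIN32
    #include <GL/wglew.h>   // windows
  #else
    #include <GL/glxew.h>   // linux
  #endif
#else
  #include <GL/glee.h>
#endif
```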

I consider this to pretty definitively narrow the problem down to GLEW (or to something about the way I include the GLEW files).

Note that I compile the GLEW source files into my application rather than linking against the libraries they provide as an alternative.
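For what it's worth, building with GLEW compiled in looks roughly like this (paths and flags are illustrative, not my exact project settings):

```shell
# Illustrative: compile glew.c into the app instead of linking -lGLEW.
gcc -c glew.c -o glew.o
g++ main.cpp glew.o -lGL -lX11 -o app
```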

Does anyone know what the problem is? Has anyone seen a problem like this?

Is anyone up to debugging this? Note that the application runs on my windows system (with GLEE or GLEW). The only difference on windows is I develop/debug with VisualStudio2005 instead of Code::Blocks… but that shouldn’t matter (sez me). If anyone has 64-bit ubuntu 10.04 and is willing to debug this problem, I’ll ZIP up and send the code. I doubt the problem is the nvidia driver on my linux system (195.36.24) since GLEE versus GLEW shouldn’t have anything to do with that.

Ideas, anyone?

Did you call glewInit();?

Yes, my application calls glewInit() before it does anything else. Thanks for the idea, though. BTW, I don’t even think it is necessary to call glewInit() when glew.c is included and built into an application. But I do it anyway, just in case.

Oh, I also want to add one strange fact that I don’t understand. I found it necessary to add GL_VERSION_4_1 to my list of compiler defines. Otherwise all sorts of compile errors would happen. I’ve looked around for a simple statement of what needs to be done to specify an OpenGL version 3.2 (or 4.0 or 4.1 or later) plus “compatibility” (until I remove every deprecated feature I might have). I find bits and pieces, but nothing simple enough to comprehend.

Part of the problem is that the more detailed discussions intermix statements about all versions of OpenGL plus all variations of “core”, “compatibility”, “forward” and so forth. They overload my brain to the point that I simply cannot figure out how to do the simple thing: give me version 3.20 (or 4.00 or 4.01 or later) and “compatibility”. Later I’ll change “compatibility” to “core”.

Anyone who can give me a simple list of what I need to have in my program to achieve v3.20 (or higher) and compatibility… would help me a lot.
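The closest I’ve pieced together from those bits and pieces is an attribute list like this for glXCreateContextAttribsARB (an unverified sketch on my part; the tokens come from the GLX_ARB_create_context and GLX_ARB_create_context_profile extensions):

```c
/* Sketch: request OpenGL 3.2 with the compatibility profile.
   Tokens from GLX_ARB_create_context(_profile); unverified by me. */
static const int ctxAttribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    0
};
/* Later, swap in GLX_CONTEXT_CORE_PROFILE_BIT_ARB for "core". */
/* ctx = glXCreateContextAttribsARB(dpy, fbconfig, 0, True, ctxAttribs); */
```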

BTW, I notice the segment violation happens before I even get to call the function that specifies version and context! Go figure.

Forgetting to call glewInit just after window creation usually causes problems like that for me. Post your code and I can take a look at it on my Ubuntu 10.04 box.

I sent you a PM (personal message) with a link to a ZIP file that contains the code.

Did you find the cause of this issue? I think I may have the same problem (not tested GLEE, though).

Using glew 1.5.6, Arch Linux 2.6.35 x86_64, Nvidia Quadro FX 570M / 256.53 drivers.

Segmentation fault when calling glXChooseFBConfig(), before any window creation, just after XOpenDisplay().

Found that you have to fetch some function pointers manually; see EarlyInitGLXfnPointers(). Here’s example code that works:

// Block.cpp
// OpenGL SuperBible, Chapter 15
// Demonstrates an assortment of basic 3D concepts
// Program by Richard S. Wright Jr.
// modified to demonstrate just opening a GL context/window
// g++ main.cpp -lGL -lGLEW

#include <GL/glew.h>
#include <GL/glxew.h>

#include <cstdio>
#include <cmath>
#include <cstdlib> 

GLuint vao;
size_t VertexArrayCount;


void EarlyInitGLXfnPointers()
{
  glXCreateContextAttribsARB = (GLXContext(*)(Display* dpy, GLXFBConfig config, GLXContext share_context, Bool direct, const int *attrib_list))glXGetProcAddressARB((GLubyte*)"glXCreateContextAttribsARB");
  glXChooseFBConfig = (GLXFBConfig*(*)(Display *dpy, int screen, const int *attrib_list, int *nelements))glXGetProcAddressARB((GLubyte*)"glXChooseFBConfig");
  glXGetVisualFromFBConfig = (XVisualInfo*(*)(Display *dpy, GLXFBConfig config))glXGetProcAddressARB((GLubyte*)"glXGetVisualFromFBConfig");
}

typedef struct RenderContextRec {
    GLXContext ctx;
    Display *dpy;
    Window win;
    int nWinWidth;
    int nWinHeight;
} RenderContext;

void CreateWindow(RenderContext *rcx)
{
    XSetWindowAttributes winAttribs;
    GLint winmask;
    GLint nMajorVer = 0;
    GLint nMinorVer = 0;
    XVisualInfo *visualInfo;
    GLXFBConfig *fbConfigs;
    int numConfigs = 0;
    static int fbAttribs[] = {
                    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
                    GLX_X_RENDERABLE,  True,
                    GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
                    GLX_DOUBLEBUFFER,  True,
                    GLX_RED_SIZE, 8,
                    GLX_BLUE_SIZE, 8,
                    GLX_GREEN_SIZE, 8,
                    0 };

    // Tell X we are going to use the display
    rcx->dpy = XOpenDisplay(NULL);

    // Resolve GLX entry points before the first GLX extension call
    EarlyInitGLXfnPointers();

    // Get Version info
    glXQueryVersion(rcx->dpy, &nMajorVer, &nMinorVer);
    printf("Supported GLX version - %d.%d\n", nMajorVer, nMinorVer);

    if(nMajorVer == 1 && nMinorVer < 2)
    {
        printf("ERROR: GLX 1.2 or greater is necessary\n");
        XCloseDisplay(rcx->dpy);
        exit(0);
    }
    // Get a new fb config that meets our attrib requirements
    fbConfigs = glXChooseFBConfig(rcx->dpy, DefaultScreen(rcx->dpy), fbAttribs, &numConfigs);
    visualInfo = glXGetVisualFromFBConfig(rcx->dpy, fbConfigs[0]);

    // Now create an X window
    winAttribs.event_mask = ExposureMask | VisibilityChangeMask | 
                            KeyPressMask | PointerMotionMask    |
                            StructureNotifyMask ;

    winAttribs.border_pixel = 0;
    winAttribs.bit_gravity = StaticGravity;
    winAttribs.colormap = XCreateColormap(rcx->dpy, 
                                          RootWindow(rcx->dpy, visualInfo->screen), 
                                          visualInfo->visual, AllocNone);
    winmask = CWBorderPixel | CWBitGravity | CWEventMask| CWColormap;

    rcx->win = XCreateWindow(rcx->dpy, DefaultRootWindow(rcx->dpy), 20, 20,
                 rcx->nWinWidth, rcx->nWinHeight, 0, 
                             visualInfo->depth, InputOutput,
                 visualInfo->visual, winmask, &winAttribs);

    XMapWindow(rcx->dpy, rcx->win);

    // Also create a new GL context for rendering
    GLint attribs[] = {
      GLX_CONTEXT_PROFILE_MASK_ARB,GLX_CONTEXT_CORE_PROFILE_BIT_ARB,  // added, required to get above 3.1
      0 };
    rcx->ctx = glXCreateContextAttribsARB(rcx->dpy, fbConfigs[0], 0, True, attribs);
    glXMakeCurrent(rcx->dpy, rcx->win, rcx->ctx);

    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        /* Problem: glewInit failed, something is seriously wrong. */
        fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
    }
}

void SetupRC(RenderContext *rcx)
{
  const GLubyte *strVersion = glGetString(GL_VERSION);
  printf("GL version: %s\n", strVersion);

  //Create shaders and shader program
  GLuint vshader(glCreateShader(GL_VERTEX_SHADER));
  GLuint fshader(glCreateShader(GL_FRAGMENT_SHADER));
  GLuint program(glCreateProgram());

  const GLchar *vshader_source[] = {
    "#version 150 core\n"
    "in vec3 vert;\n"
    "void main() {\n"
    "  gl_Position = vec4(vert, 1.);\n"
    "}\n" };

  const GLchar *fshader_source[] = {
    "#version 150 core\n"
    "out vec4 fragcolor;\n"
    "void main() {\n"
    "  fragcolor = vec4(0.0, 0.0, 1.0, 0.0);\n"
    "}\n" };

  glShaderSource(vshader, 1, vshader_source, NULL);
  glShaderSource(fshader, 1, fshader_source, NULL);
  glCompileShader(vshader);
  glCompileShader(fshader);
  glAttachShader(program, vshader);
  glAttachShader(program, fshader);
  glLinkProgram(program);
  glUseProgram(program);

  //Get handles to shader uniforms
  //... none for this simple vert/frag shader

  //Data destined for video memory, can be local (and lost after bound to GPU!).
  #define R 0.9
  GLfloat vertices[] = { // in vec3 vert;
    -R,  R, 0.0, // xyz
    -R, -R, 0.0,
     R,  R, 0.0,
     R, -R, 0.0 };
  VertexArrayCount = sizeof(vertices)/sizeof(GLfloat)/3; // 3 for {x y z}

  //Create geometry vertex array using Model definition
  glGenVertexArrays(1, &vao);
  glBindVertexArray(vao);

  //in vec3 vert;
  GLuint bon_vert; // buffer object name
  glGenBuffers(1, &bon_vert);
  glBindBuffer(GL_ARRAY_BUFFER, bon_vert);
  glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
  const GLint loc_vert(glGetAttribLocation(program,"vert"));
  glEnableVertexAttribArray(loc_vert);
  glVertexAttribPointer(loc_vert, 3, GL_FLOAT, GL_FALSE, 0, NULL);

  glClearColor(0.0f, 0.0f, 0.0f, 1.0f );
}

void RenderScene(RenderContext *rcx)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)VertexArrayCount);

    // Flush drawing commands
    glXSwapBuffers(rcx->dpy, rcx->win);
}

void KeyPressFunc(unsigned char key)
{
    switch( key )
    {
    default: exit(0); break;
    }
}

void ChangeSize(int w, int h)
{
    glViewport(0, 0, w, h);
}

void Cleanup(RenderContext *rcx)
{
    // Unbind the context before deleting
    glXMakeCurrent(rcx->dpy, None, NULL);

    glXDestroyContext(rcx->dpy, rcx->ctx);
    rcx->ctx = NULL;

    XDestroyWindow(rcx->dpy, rcx->win);
    rcx->win = (Window)NULL;

    XCloseDisplay(rcx->dpy);
    rcx->dpy = 0;
}

// Main entry point 
int main(int argc, char* argv[])
{
    // Setup X window and GLX context
    // Set initial window size
    RenderContext rcx;
    rcx.nWinWidth  = 800;
    rcx.nWinHeight = 600;
    CreateWindow(&rcx);
    SetupRC(&rcx);
    ChangeSize(rcx.nWinWidth, rcx.nWinHeight);

    // Draw the first frame before checking for messages
    RenderScene(&rcx);

    // Execute loop the whole time the app runs
    Bool bWinMapped = False;
    for(;;)
    {
        XEvent newEvent;
        XWindowAttributes winData;

        // Watch for new X events
        XNextEvent(rcx.dpy, &newEvent);

        switch(newEvent.type)
        {
        case UnmapNotify:
            bWinMapped = False;
            break;
        case MapNotify :
            bWinMapped = True;
            break;
        case ConfigureNotify:
            XGetWindowAttributes(rcx.dpy, rcx.win, &winData);
            rcx.nWinHeight = winData.height;
            rcx.nWinWidth = winData.width;
            ChangeSize(rcx.nWinWidth, rcx.nWinHeight);
            break;
        case KeyPress:
            KeyPressFunc((unsigned char)newEvent.xkey.keycode);
            break;
        case DestroyNotify:
            break;
        }

        if(bWinMapped)
            RenderScene(&rcx);
    }

    Cleanup(&rcx);
    return 0;
}

Note: to get a 4.0 context on nvidia 465 in linux, I had to change the code to

    // Also create a new GL context for rendering
    GLint attribs[] = {
      0 };

Thanks a lot! I used the context creation from the wiki, and thus needed glXGetFBConfigAttrib as well:

    glXGetFBConfigAttrib = (int(*)(Display *dpy, GLXFBConfig config, int attribute, int *value))glXGetProcAddressARB((GLubyte*)"glXGetFBConfigAttrib");
