Display lists and threads

If you want to create a DL in the background, just create two contexts with shared lists in two different threads. You can use one thread to build the DL and the other to draw to the window.

Hi, what I have been trying is exactly this. But somehow my whole application crashes with a “can’t write to memory address” error. I am creating a display list in a thread that uses a second rendering context.

However, the main thread that periodically draws the list seems to crash. It reads the list number and gets 0 until the thread is finished. Once finished, the thread sets the new list number so it is accessible to the main thread. But somewhere in the code it crashes (no exception is thrown, so I cannot tell you where exactly).

I’d really appreciate your help!

Is there perhaps a tutorial that describes how to use one thread to create display lists and another one only to draw/call the lists?

Ahh, my suggestion :slight_smile:

Are you sure that the DL is really ready when you use it in your main thread? Are you calling glFinish to make sure of it?

Also, keep in mind that my suggestion was very theoretical in nature, because hardly anyone would do something weird like that :slight_smile: Well, I wouldn’t, for sure. Just stick to VBOs.

Display lists are associated with the context that was current when they were created. You can’t access a display list from another context because it is not in that address space. Some windowing toolkits (Motif, Qt, etc.) will let you create multiple contexts that share the same address space. This allows you to share display lists, textures, etc. between the contexts. The quote above says to share the lists, but you have to enable that behavior before calling those lists. Typically, this is done through the constructor for the context/widget. In Motif/Qt I believe there is a parameter called “share_widget”. Pass context1 as the share_widget for context2. Now they can share display lists.

As Zengar said, this seems weird. Can you explain more about what you’re really doing? Explaining the whole rendering pipeline (how you create/destroy your lists, and how the rendering thread accesses and renders them) would really help.


Ok, what I am doing is generating a lot of 3D objects at runtime. Each one gets its own display list.

In the beginning there was only the main thread, which sequentially computed the coords for the glVertex3f calls, put them into a display list, and then rendered all of the objects computed so far.

On slower machines this approach stalls badly while a new object is being computed. So I thought of generating those display lists in another thread. Each thread should generate only one display list and hand its ID to the main thread, which continuously renders all of the lists via their IDs. (I don’t target multi-core CPUs so far.)

Are VBOs a better way to gain performance? What would you suggest? Aren’t VBOs much more complex to learn than display lists?

Thanks for your participation.

If you care: I am using the Tao Framework together with C# to create my OpenGL apps.

VBOs are very similar to vertex arrays, except that you store your model data directly in video memory. Performance-wise they should be on par with display lists for rendering, and much faster to set up. In the end you will get much better performance with VBOs, because you spare the driver the time it would otherwise need to create and optimize your display lists. Display lists were not designed for scenarios like yours; they should rather be created before the main rendering starts. You could say that VBOs are more complex to learn, but it would hardly take an hour or two of your time.
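For reference, a minimal VBO round trip with the Tao bindings might look like this. This is a sketch, assuming the ARB entry points and constants as Tao’s `Gl` class exposes them:

```csharp
// Setup (once): create a VBO and upload a triangle into video memory
int[] buffers = new int[1];
Gl.glGenBuffersARB(1, buffers);
Gl.glBindBufferARB(Gl.GL_ARRAY_BUFFER_ARB, buffers[0]);

float[] verts = { 0f, 0f, 0f,   1f, 0f, 0f,   0f, 1f, 0f };
Gl.glBufferDataARB(Gl.GL_ARRAY_BUFFER_ARB,
    (IntPtr)(verts.Length * sizeof(float)), verts, Gl.GL_STATIC_DRAW_ARB);

// Per frame: draw straight from the bound buffer
Gl.glBindBufferARB(Gl.GL_ARRAY_BUFFER_ARB, buffers[0]);
Gl.glEnableClientState(Gl.GL_VERTEX_ARRAY);
Gl.glVertexPointer(3, Gl.GL_FLOAT, 0, IntPtr.Zero); // offset 0 into the VBO
Gl.glDrawArrays(Gl.GL_TRIANGLES, 0, 3);
Gl.glDisableClientState(Gl.GL_VERTEX_ARRAY);
```

Note that the drawing half is the same client-state code as for plain vertex arrays; only the pointer argument becomes an offset into the bound buffer.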

And to your problem: are you sure you share the lists between all the contexts you create? Are all of the contexts valid and bound to a window? And once more, do you ensure command completion using glFinish? OpenGL queues commands, so just calling glEndList does not ensure that the list was created…

Zengar said all…

Use glFinish in order to know that all DLs are fully created. Also, ensure that all old DLs are properly destroyed. From what you said it might be the DL creation that fails, but it could also be that your graphics card runs out of memory because you don’t delete the old DLs (the latter is just a guess, since I have never tried such a thing).
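To make that concrete, retiring a list is a one-liner with the Tao bindings (a sketch; `oldList` stands for whatever ID you are done with):

```csharp
// Delete the old display list once no thread calls it anymore.
// This must be issued while a context that shares the list's
// namespace is current, otherwise the ID means nothing.
Gl.glDeleteLists(oldList, 1);
```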

I don’t know about Windows, but under Linux we need to set up the context with the shared one passed in at creation time. Maybe there’s something like this under that OS…

To end, I’d like to say that display lists are really fast, especially on NVIDIA cards and for static geometry. But their creation can be slower than that of VBOs, since they are optimized by the driver.

That’s what we’re here for :wink:

Hmm, first of all thank you very much for your participation.

I have inserted glFinish() right after glEndList() in the thread. In debug mode I can see that the main application really calls glCallList(ID) with the newly generated display list number. However, no objects appear on the screen. The place where the objects should appear is empty. In fact, all the other objects drawn in the main application code are shown.

Maybe I forgot some commands:

1) Do I have to switch contexts at the corresponding places in my code? (Switching to the main context in the main application code (renderScene()) AND switching to the second context in the thread code (generateDisplayList()).)

1.1) If so: I wonder how OpenGL separates the calls between makeCurrent() and releaseCurrent(), which would be issued by both code parts simultaneously (main and thread).

2) Do I have to call glFinish() in the main application code, too? If so, where in the code? At the end?

3) Is it OK for the main application code to call glClear() on each frame? (I think this should be fine, because the thread is not drawing but only generating the display list.)

@jide: OpenGL is platform independent, so are you sure the creation of contexts differs on different OS? I did not do anything special to set up shared display lists.

@Zengar: VBOs seem to be a technology I don’t want to use, because I want the code to run on slow/ancient machines, too :wink: (however, fast enough to use threads). I don’t know how to share the display lists. Until now I thought providing the display list number should be enough (glCallList(numberProvidedByThread)).
The render context and pixel format as well as device context are set up properly (I mean it returns TRUE for each call.)

@all: Here is the code that creates the second context (the first one is created by the framework I am using). Maybe you’ll find some odd lines. I copied some example code to set up the pixel format.

            dc = User.GetDC(form.Handle);

            bool dcsuccess = (dc != IntPtr.Zero);

            // pfd is set to express what we want
            pfd.nSize = (short)Marshal.SizeOf(pfd);     // Size Of This Pixel Format Descriptor
            pfd.nVersion = 1;                           // Version Number
            pfd.dwFlags = Gdi.PFD_DRAW_TO_WINDOW |      // Format Must Support Window
                Gdi.PFD_SUPPORT_OPENGL |                // Format Must Support OpenGL
                Gdi.PFD_DOUBLEBUFFER;                   // Format Must Support Double Buffering
            pfd.iPixelType = (byte)Gdi.PFD_TYPE_RGBA;   // Request An RGBA Format
            pfd.cColorBits = (byte)24;                // Select Our Color Depth
            pfd.cRedBits = 0;                           // Color Bits Ignored
            pfd.cRedShift = 0;
            pfd.cGreenBits = 0;
            pfd.cGreenShift = 0;
            pfd.cBlueBits = 0;
            pfd.cBlueShift = 0;
            pfd.cAlphaBits = 0;                         // No Alpha Buffer
            pfd.cAlphaShift = 0;                        // Shift Bit Ignored
            pfd.cAccumBits = 0;                         // No Accumulation Buffer
            pfd.cAccumRedBits = 0;                      // Accumulation Bits Ignored
            pfd.cAccumGreenBits = 0;
            pfd.cAccumBlueBits = 0;
            pfd.cAccumAlphaBits = 0;
            pfd.cDepthBits = 16;                        // 16Bit Z-Buffer (Depth Buffer)
            pfd.cStencilBits = 0;                       // No Stencil Buffer
            pfd.cAuxBuffers = 0;                        // No Auxiliary Buffer
            pfd.iLayerType = (byte)Gdi.PFD_MAIN_PLANE;  // Main Drawing Layer
            pfd.bReserved = 0;                          // Reserved
            pfd.dwLayerMask = 0;                        // Layer Masks Ignored
            pfd.dwVisibleMask = 0;
            pfd.dwDamageMask = 0;

            int pixelFormat = Gdi.ChoosePixelFormat(dc, ref pfd);

            bool pixSuccess = Gdi.SetPixelFormat(dc, pixelFormat, ref pfd);

            secondContext = Wgl.wglCreateContext(dc);
            bool mcontextsuccess = (secondContext != IntPtr.Zero);

Thanks in advance :rolleyes:

It still sounds to me like you haven’t enabled context sharing between the widgets. You need to call wglShareLists:
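Roughly like this, using the Tao `Wgl` wrapper (a sketch; `mainContext` stands for the framework-created first context, `secondContext` is from your snippet):

```csharp
// Call this once, after both contexts exist but BEFORE any display
// lists are created in either of them:
bool shared = Wgl.wglShareLists(mainContext, secondContext);
// A FALSE return means sharing failed (for example, because
// secondContext already owns display lists); check
// Marshal.GetLastWin32Error() in that case.
```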


Originally posted by Rapthor:
OpenGL is platform independent, so are you sure the creation of contexts differs on different OS? I did not do anything special to set up shared display lists.

You said it yourself: you are not sharing lists :slight_smile: This is disabled by default. OpenGL itself is platform-independent, but the context subsystem is not. As jtipton said, use wglShareLists.

And depending on your application, if you don’t want to use VBOs, maybe you should try vertex arrays? As the semantics of VBOs and vertex arrays are almost the same, you could check the availability of the extension and use it dynamically without much burden on your code. VBO support is pretty widespread now (it exists even on cards like the RIVA TNT), so you will hardly find a card that does not support it…
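Plain vertex arrays, for comparison, need no extension at all. A sketch with the Tao bindings:

```csharp
float[] verts = { 0f, 0f, 0f,   1f, 0f, 0f,   0f, 1f, 0f };

Gl.glEnableClientState(Gl.GL_VERTEX_ARRAY);
Gl.glVertexPointer(3, Gl.GL_FLOAT, 0, verts); // data stays in client memory
Gl.glDrawArrays(Gl.GL_TRIANGLES, 0, 3);
Gl.glDisableClientState(Gl.GL_VERTEX_ARRAY);
```

If the VBO extension is present, the same drawing code works with a bound buffer and an offset instead of the client-side array, which is what makes the dynamic fallback cheap.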

This sounds too good :smiley:

However, I tried calling wglShareLists(rc1, rc2) before any other OpenGL code. rc1 and rc2 are both valid. What I got is a frozen black screen. No exception! When resizing the window, I got some strange colours changing with the size of the window. At some point during resizing I even saw the rendered scene (only the first frame). But once again the place where the thread-computed object should be is empty.

I called makeCurrent(…threadContext…) in the thread’s method before creating the display list. Then glFinish() after glEndList(). And finally a makeCurrent(null, null).

The same for the main code: makeCurrent(…mainContext…) before rendering the whole scene. Then tried with and without glFinish() at the end. Finally makeCurrent(null, null).
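To make that concrete, the worker thread currently does roughly this (method and variable names are placeholders from my app):

```csharp
// Worker thread: build one display list in the second context
Wgl.wglMakeCurrent(dc, secondContext);

int list = Gl.glGenLists(1);
Gl.glNewList(list, Gl.GL_COMPILE);
//   ...glVertex3f calls that define the new object...
Gl.glEndList();
Gl.glFinish();                      // wait until the list really exists

Wgl.wglMakeCurrent(IntPtr.Zero, IntPtr.Zero);
// then publish `list` to the main thread (e.g. under a lock)
```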

What else could be a problem?

I will think about VBOs in that manner to have some dynamical code.

GL is platform independent, but the underlying windowing libraries are not.

pfd.cDepthBits = 16;

AFAIK all depth buffers on current cards are 24 bits in size.

Also under Linux I don’t have to create a full display setup for the second thread.

Finally, each of your contexts needs to be current on the thread that uses it.

After looking over the code several times, I suspect I have some basic problem with my thread handling. Maybe I was searching for the failure in the wrong places. OpenGL is not my problem; threading in general is.

I will report what I found out.

Yeah, synchronization is the key word here.