Creating an OpenGL Context with SDL

I’m developing an OpenGL game and use SDL 1.2 to create the context (on Linux/Ubuntu 10.10; GPU: ATI Mobility Radeon HD 4650; GPU driver: radeon (open source); laptop bought at the end of 2009).

But I think SDL 1.2 can only create contexts up to OpenGL 2.1, and since I believe OpenGL 3.0+ would be faster on my hardware, I’d like to use the radeon driver’s OpenGL implementation and create contexts for versions > 2.1. An example program prints:

Vendor : Advanced Micro Devices, Inc.
Renderer : Mesa DRI R600 (RV730 9480) 20090101 x86/MMX/SSE2 TCL DRI2
Version : 2.1 Mesa 7.9-devel

I thought Mesa was a software OpenGL implementation with slow hardware acceleration, so it’s probably not very fast. Or is it normal that Mesa shows up as the OpenGL renderer on Linux?

I found two possibilities for creating OpenGL contexts > 2.1:
1. Use SDL_GL_LoadLibrary(NULL) and SDL_GL_GetProcAddress; but in this case I would have to restructure my code to use function pointers. I wasn’t able to print out vendor, renderer and version this way, because my pointer const GLubyte* (APIENTRY *glGetString)(GLenum); was strangely “(nil)” (noticed with the help of printf( "%p\n", f.glGetString );). See the sketch after this list.
2. Use SDL 1.3 as in “Tutorial1: Creating a Cross Platform OpenGL 3.2 Context in SDL (C / SDL)” on the OpenGL Wiki; but SDL 1.3 is not stable, so there could be problems using it (some functions may have changed, and I don’t know whether documentation for SDL 1.3 exists).
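
For reference, approach 1 would look roughly like this (a minimal sketch; note that in SDL 1.2, SDL_GL_LoadLibrary should be called before SDL_SetVideoMode, and SDL_GL_GetProcAddress typically returns NULL until the video mode/context exists, which may explain the “(nil)”):

#include <stdio.h>
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

/* hypothetical function-pointer type for glGetString */
typedef const GLubyte *(APIENTRY *PFN_GLGETSTRING)(GLenum);

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_LoadLibrary(NULL);                  /* before SDL_SetVideoMode */
    SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);

    /* only look the symbol up after the context exists */
    PFN_GLGETSTRING pglGetString =
        (PFN_GLGETSTRING)SDL_GL_GetProcAddress("glGetString");

    if (pglGetString) {
        printf("Vendor  : %s\n", pglGetString(GL_VENDOR));
        printf("Renderer: %s\n", pglGetString(GL_RENDERER));
        printf("Version : %s\n", pglGetString(GL_VERSION));
    }

    SDL_Quit();
    return 0;
}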

I think a quite popular game (Secret Maryo Chronicles) uses SDL 1.2 without SDL_GL_LoadLibrary, and its speed is also quite OK.
But because my game is a webcam game with motion detection, every millisecond counts :) and so, if OpenGL 3.x and the radeon renderer are faster, I would prefer them.
And when I want to compile the game on Windows, Windows may only ship an old gl.h, and I don’t know whether opengl.org still offers a header for 2.1 (and not just gl3.h).

What do you suggest I do?

By the way, I wanted to see why it uses Mesa’s 2.1 and not the radeon driver’s, and I saw (with glxinfo) that my system only has Mesa:

OpenGL renderer string: Mesa DRI R600 (RV730 9480) 20090101 x86/MMX/SSE2 TCL DRI2
OpenGL version string: 2.1 Mesa 7.9-devel
OpenGL shading language version string: 1.20

but actually, it should use radeon (xserver-xorg-video-radeon is installed).

I looked at /usr/lib/libGL.so and it’s a link to “mesa/libGL.so”, not to a radeon libGL.so. How can I solve this, and why doesn’t the link point to the radeon library?

Download and install official ATI drivers.

After installation you should see something like this:

$ fglrxinfo
display: :0.0 screen: 0
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: ATI Mobility Radeon HD 4500 Series
OpenGL version string: 3.3.10317 Compatibility Profile Context

xserver-xorg-video-radeon is just the 2D Xorg driver that renders your window manager (X11). It can’t provide official/full 3D acceleration.

Mesa DRI R600 is not software rendering; it is the open source driver for your card. It is probably slower than the official binary driver, but still much better than no 3D acceleration at all.

I had already installed the ATI fglrx driver, but I had to replace it with the open source driver because it caused a memory leak (when calling SDL_Flip). Now I don’t use SDL_Flip any more, so this wouldn’t be a problem, but if I ever want to use it again, the error could come back.
Maybe this problem has been fixed already (it occurred on Ubuntu 10.04); otherwise I would have to file a bug report, which is a lot of work. So if the open source driver is fast enough for displaying and updating (via glTexSubImage, sketched below) a 32-bit webcam texture (resolution: 1366x768), rotating some textures and a few other small things (like a GUI), nothing in 3D, with enough speed left for OpenCV to convert the webcam texture to greyscale and check motion detection (when needed) with sparse optical flow, I’d prefer to keep the open source driver. Is the open source driver fast enough for that?
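
The per-frame texture update I mean is essentially this (a minimal sketch; the texture id and pixel pointer are placeholders, and I assume a 32-bit BGRA frame):

#include <SDL/SDL_opengl.h>

/* hypothetical per-frame upload; 'tex' is an already-created
   GL_TEXTURE_2D and 'pixels' points to one 1366x768 32-bit frame */
static void upload_webcam_frame(GLuint tex, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,   /* mipmap level 0 */
                    0, 0,               /* x/y offset     */
                    1366, 768,          /* frame size     */
                    GL_BGRA, GL_UNSIGNED_BYTE, pixels);
}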

Hard to say; try it and you will see.
Since your use case is close to the needs of accelerated compositing managers, it is probably feasible.

OK, I’ll try it and see if it works.
Thanks for your answer.
For faster computers, would it be better to use SDL_GL_LoadLibrary or SDL 1.3 so that contexts for OpenGL > 2.1 can be created? Or doesn’t it matter, because (as long as you don’t use features of OpenGL > 2.1) speed depends not on the OpenGL version but only on the GPU driver?
If speed also depends on the OpenGL version, should I use SDL 1.3 or SDL_GL_LoadLibrary?
If using OpenGL 2.1 is OK, where can I get the current version of gl.h from? (I think Linux has up-to-date headers, but what about Windows?)

I have seen no documented speed advantage in going directly for GL 3.x/4.x; no need to mess with that unless you need a > 2.1 feature that has no equivalent as a GL 2.1 extension.

GL speed depends mostly on the driver and hardware.

Official headers: http://www.opengl.org/registry/#headers
Be sure to use both gl.h (core) and glext.h (extensions).
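
For what it’s worth, on Linux/Mesa the usual pattern is the one below (a sketch; defining GL_GLEXT_PROTOTYPES exposes the prototypes directly, which works there because libGL exports the symbols, whereas on Windows you would still fetch pointers with wglGetProcAddress for anything past GL 1.1):

/* gl.h comes from the platform/Mesa, glext.h from opengl.org */
#define GL_GLEXT_PROTOTYPES   /* expose post-1.x prototypes (Linux/Mesa) */
#include <GL/gl.h>
#include <GL/glext.h>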

Thank you for your answer!
I can’t find a link to gl.h (only to glext.h, glxext.h and wglext.h) at http://www.opengl.org/registry/#headers.
So my question is: Where can I get this header file from?
On Windows, the OpenGL headers are certainly not up to date (so I have to download them), but on Linux I think the headers are OK: glext.h is identical to the opengl.org version, and gl.h says “Mesa 3-D graphics library * Version: 7.6”, which should be fine because the Mesa header is very close to the standard.
One last thing:
Do topics in this forum have to be marked as finished?

Indeed you are right, there is no gl.h there…
There is the “beta” gl3.h, but it is not production-ready.
gl.h has not been updated since GL 1.1 or 1.2, I guess.
There is no need to update it, because everything newer is in fact defined in glext.h.

Thank you for your answer!
One last thing:
In http://www.opengl.org/wiki/Getting_started (OpenGL 2.0+ and extensions) it is mentioned that, without function pointers, you can’t use OpenGL 2.0+ functions.
Is that the reason why SDL has a header named SDL_opengl.h containing the function pointer typedefs for OpenGL up to 2.1, and does it internally load the library dynamically?
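
I mean pointer typedefs along these lines (paraphrased from memory; the real header may differ in detail):

#include <SDL/SDL_opengl.h>   /* for the GL types */

/* the kind of typedefs glext.h / SDL_opengl.h declare for GL 2.0+ */
typedef GLuint (APIENTRY *PFNGLCREATESHADERPROC)(GLenum type);
typedef void   (APIENTRY *PFNGLCOMPILESHADERPROC)(GLuint shader);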

SDL doesn’t load the function pointers internally; you have to do it yourself.

Check: Game Programming Wiki - GPWiki

or use glew.
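
A rough sketch of the glew route with SDL 1.2 (a hypothetical minimal program; the window size and version check are just examples):

#include <GL/glew.h>   /* must be included before any gl.h */
#include <SDL/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);

    if (glewInit() != GLEW_OK)   /* resolves all available entry points */
        return 1;

    if (GLEW_VERSION_2_1) {
        /* GL 2.x functions such as glCreateShader can now be called directly */
    }

    SDL_Quit();
    return 0;
}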

But why can SDL use OpenGL 2.0/2.1 functions when it doesn’t use function pointers internally?

OpenGL context creation is the same for all OpenGL versions < 3.x.
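
That is, with SDL 1.2 the classic path is simply the following (a minimal sketch): no GL version is requested, and the driver hands back the highest version it supports.

#include <SDL/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    /* creates the window and a classic GL context in one call */
    SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);

    SDL_Quit();
    return 0;
}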

By the way, the link you provided in your first post (namely “Tutorial1: Creating a Cross Platform OpenGL 3.2 Context in SDL (C / SDL)” on the OpenGL Wiki) doesn’t say that you can use post-GL 1.2 functions without loading the extensions.

I think you misunderstand a point.

An OpenGL context allows a window (to put it simply) to make OpenGL calls and to render with OpenGL.
Since OpenGL 3, a new context creation model is needed for technical reasons. But before OpenGL 3, that is up to OpenGL 2.x, you can use the old context creation model. (You can even use the old creation model to get a compatibility profile context up to OpenGL 3.3.)

As for SDL: SDL 1.2 won’t let you create the new-style GL contexts, but it will let you use all the OpenGL functions supported by your card (up to GL 3.3 at least; I think GL 4.x doesn’t allow a compatibility profile… [maybe someone here can comment on that]).
SDL 1.3, which is in development, lets you create the new-style context and get a full GL 3.x context.

But remember that if you want to use a pure GL 3.x core context, there are many GL things you’ll have to forget: immediate mode, display lists, bitmaps… cf. http://www.opengl.org/wiki/History_of_OpenGL#OpenGL_3.0_.282008.29
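
For comparison, the new model in SDL 1.3 looks roughly like this (a sketch following the linked tutorial; the 1.3 API was still changing, so details may differ):

#include <SDL/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);

    /* request a 3.2 context before creating the window */
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);

    SDL_Window *window = SDL_CreateWindow("GL 3.2", 0, 0, 640, 480,
                                          SDL_WINDOW_OPENGL);
    SDL_GLContext context = SDL_GL_CreateContext(window);

    /* ... render ... */

    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}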

OK, thank you.
I don’t really need OpenGL 3.x so I won’t change my system.
Did I understand this correctly: I can also use OpenGL functions of versions > 1.2 when I have the latest glext.h header (so I don’t need to load the library dynamically)?

Yes, definitely.

If you use SDL, I’m not sure whether you need glext.h. You can also use glew.

Thank you very much for your answer!
Maybe for my next project I’ll use glew in addition to SDL, so that I don’t have to check whether an extension is available on each platform.
But for this project I don’t need any special extensions, because the only thing OpenGL does in my game is display a webcam picture, rotate some textures and a few other small things (nothing you need extensions for).

There is a GL 4.0 and 4.1 compatibility profile: http://www.opengl.org/registry/doc/glspec41.compatibility.20100725.pdf

Under ATI (and NVIDIA), creating a GL context the old-school way gives you the highest GL version the card can do, as a compatibility profile; for your card that means a GL 3.3 compatibility profile (as the fglrxinfo output above shows). Such a context has all the stuff of the corresponding core profile plus all the stuff from GL 2.1 and before.

I cannot imagine a situation where a core profile runs faster than a compatibility profile (NVIDIA actually says that a core profile forces the driver to do even more checks than a compatibility profile). The main thing a core profile does is stop you from using some crusty old API points from a long time ago (it also removes the fixed function pipeline, requires that all attribute data be sourced from buffer objects, and a few other tidbits). Though some bits of removed functionality were maybe not so crusty (display lists, I am looking at you).

With only one exception, a compatibility profile is a strict superset of a core profile (the exception is that in a compatibility profile a GLSL program must use attribute 0, whereas in core this is not needed; the cause is essentially glBegin/glEnd compatibility). For each version of GL starting at 3.1 there is a spec for the core profile and a spec for the compatibility profile (strictly speaking, for 3.1 the terms core and compatibility profile did not exist, so the two specs are with and without GL_ARB_compatibility).

Lastly, I strongly advise you to use the Catalyst drivers from AMD/ATI; they perform several times faster than the open source drivers and have much more functionality. Indeed, the Mesa infrastructure (not just the drivers) still does not even have everything needed for GL 3.

Thanks kRogue.