ATI, Linux 2.5.1 driver, PBuffer

ATI’s 2.5.1 driver doesn’t support PBuffers. The Catalyst Windows drivers support them.

Does anyone have any information on when PBuffer support will be implemented in the Linux drivers?

Granted it’s currently GLX 1.2, but the extension would be nice, as would GLX 1.3.


[This message has been edited by Ribeye (edited 05-15-2003).]


There are 2.9.6 drivers for the FireGL X1/Z1, which work fine on R300 hardware (tested with a 9700 Pro). You can check whether they have a pbuffer.


Thanks for the link. I tried the 2.9.6 driver and, as you stated, it works on the Radeon 9700 Pro. However, it suffers from the same lack of PBuffer support and GLX 1.3.

Anyone have an idea when ATI is going to get its act together and provide a Linux driver on par with its Windows driver?

They should also take a serious look at their website. It’s in a complete shambles; the Linux drivers are all over the place.

Anyone from ATI out there?


I agree that ATI’s Linux support needs some serious work. I wrote about my experiences in the thread:

I installed an ATI card purely because I have another machine with an nVidia card and I wanted to test my work on different systems. After much pain I think it’s best to give this card to someone else and return to nVidia. I am disappointed by this because I like the idea of heterogeneous environments.

I first installed my Radeon under RedHat 8.0 with whatever its default version of XFree86 was. I found that some GL extensions caused ‘illegal instruction’ errors when called. I tried Mandrake 9.1 and the extensions worked (after much messing about with a different version of XFree86), but PBuffers weren’t supported adequately. (I wrote a small test program to demonstrate my belief that PBuffers and windows were sharing the same rendering resource, effectively making PBuffers useless for my work.) I now have XFree86 4.3, only to find that ATI’s drivers currently stop their support at 4.2.

What’s going on? How can nVidia get its Linux support so right, but ATI get it so infuriatingly wrong? How are ATI’s drivers implemented? Surely there don’t need to be different versions of the drivers dependent on which version of XFree86 you’re using. I can understand that Linux’s driver model requires kernel modules (the approach taken by nVidia) and I can understand different drivers for MAJOR revisions, but why different drivers for 4.1, 4.2 and 4.3 for ATI cards?

Why does ATI write that they only “emulate” pixelbuffers because SGI haven’t released the source code for GLX beyond 1.2? This doesn’t seem to have stopped nvidia from having a proper 1.3 GLX interface. If pbuffers have to be “emulated” (whatever the hell that means—either you’re implementing the pixelbuffer’s interface or you’re not) then why not implement them completely and correctly?

My experience with ATI has been very painful. It took me almost no time to get hardware support working on nVidia cards. I’ve reinstalled Mandrake (for other reasons) and have XFree86 4.3 installed from the CD, and even though XFree86 claims it now supports Radeon cards out of the box, my system is still using Mesa GL rendering. I’ve given up caring about ATI cards.

I use OpenGL for scientific work: I render images to the pbuffer that I never see, but use them to collect data. From my experience with ATI’s cards I have very little faith in the quality of the data I get back. If my system has an erratic cost function on an nVidia card, I am confident the problem lies with my cost function; under ATI, however, I just don’t know if the pixelbuffer resource is being used by the card correctly. It’s one thing when you’re playing a game and can see the card screw up (and it doesn’t really matter if you can visually see that the pixelbuffer resource was lost; it’s only a game), but it’s another when it’s all done behind the scenes and the results are analysed through graphs.

I am disappointed that support is so inadequate.


Nice to hear that I’m not the only one suffering. I guess misery loves company. What I find sad, however, is that ATI seems to be silent on this issue. I can understand that they are in a bit of a shambles as far as their drivers are concerned. Their PBuffers under Windows weren’t working properly 6 months ago (Catalyst 3.1, if I recall; I haven’t tested them since, but that’s just ridiculous). Their driver support for mobile chipsets is just as pitiful, with their pass-the-buck attitude. IBM’s ThinkPad drivers are 6 months or more behind.

I suppose if you’re playing games for a living ATI will try to cater to your needs: just by looking at the driver release notes you can see that almost every fix is for one video game or another.

If you are serious about cross-platform development (Linux/Windows) there is still only one show in town, and that’s NVIDIA.

Anyone from ATI have any comments? I realize you guys can’t fix all of these problems immediately, but you could try to present a clear vision of where your software/hardware support on Linux is going.


Just wanted to add my two cents. Although the support is quite flaky, I have PBuffers working on the Radeon 9700 and FireGL Z1/X1. You can take a look at the fglxgears source to see how to set up PBuffers on ATI hardware. In the later drivers, floating-point PBuffer support even seems to work.

– Niels


Are you SURE that pixelbuffers work correctly? I have the same gears demo working under my Radeon 9500, but if you look at it carefully I think you’d see that it’s far from complete. Sure, it compiles and runs and visually seems to do the trick (well, it did on my system), but I noted the following inconsistencies by modifying their fglxgears.cpp demo:

  1. the pixelbuffer and window resources map to the same part of the frame-buffer
  2. context switch calls always return false
  3. querying the drawable returns nonsensical results

I’ll explain what these mean:

  1. Pixel buffers are meant to be separate, off-screen rendering contexts. The demo code I wrote created a window and a pixelbuffer. I changed context to the window, set the clear colour to blue and cleared the screen, then changed context to the pixelbuffer, set the clear colour to red and cleared the screen. I changed context back to the window and printed out the RGB values of its framebuffer, then changed context to the pixelbuffer and printed out the colours of ITS framebuffer. I tested this code on an nVidia card and got the results I wanted: two separate framebuffers were created, and their values were blue and red for the window and pixelbuffer respectively. However, on the ATI card I got back red for BOTH the window and the pixelbuffer. My interpretation is that either the contexts aren’t being changed, or the pixelbuffer and window framebuffer resources are being shared (for some inexplicable reason). The gears demo works fine because the gears are rendered to the pixelbuffer(/window) resource first, copied to a texture and then rendered to the window buffer, so there is never any contention.

  2. To work out if the context was being changed, I added asserts to the GLX change-context calls in the fglrxgears demo. They always failed (i.e. the context change was returning false), even though the program was supposedly running “properly”.

  3. Querying the drawable dimensions returns nonsensical values. You should be able to call glXQueryDrawable(currdisplay, currdrawable, GLX_WIDTH, &w) to get the size of the current drawable (say, the pixelbuffer), but the driver returns insanely large values.

All these problems can be demonstrated by simple additions to ATI’s example program. Even though their program supposedly runs “correctly”, they don’t check that context switches (i.e. glXMakeContextCurrent(display, pbuffer, pbuffer, context)) return true:


The window/pixelbuffer resource contention can be demonstrated quite simply by modifying the render loop to alternately write to both the pixelbuffer and the window buffer and then read them back.

The query drawable code can be easily added after the pixelbuffer is created.

Pixelbuffers may appear to work, but my assertion is that they are fundamentally flawed in ATI’s implementation.


Maybe I’m blind, but PBuffers are not supported. Please read the following carefully: if they are supported, ATI is making them unusable. My only means of verifying the existence of features is the gl* strings; I’m not going to just cross my fingers and hope for the best by starting to access extensions that don’t exist.

I also took a look at the 2.9.12 drivers. I did not bother to install them, however, since they clearly state the following in the release notes:

“PBuffer support: available only if you have an ATI FIRE GL graphics board installed in your computer. No PBbuffer support for ATI Radeon and other […]”

Of course, given ATI’s poor information about their drivers on their website, can I trust the release notes? Maybe I should just waste another hour and install them.

name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
client glx vendor string: SGI
client glx version string: 1.2
client glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context,
GLX_ARB_get_proc_address, GLX_ATI_pixel_format_float,
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: Radeon 9700 Pro Pentium III (SSE)
OpenGL version string: 1.3 (X4.1.0-2.9.6)
OpenGL extensions:
GL_ARB_multitexture, GL_EXT_texture_env_add, GL_EXT_compiled_vertex_array,
GL_S3_s3tc, GL_ARB_depth_texture, GL_ARB_fragment_program,
GL_ARB_multisample, GL_ARB_point_parameters, GL_ARB_shadow,
GL_ARB_shadow_ambient, GL_ARB_texture_border_clamp,
GL_ARB_texture_compression, GL_ARB_texture_cube_map,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3,
GL_ARB_texture_mirrored_repeat, GL_ARB_transpose_matrix,
GL_ARB_vertex_blend, GL_ARB_vertex_program, GL_ARB_window_pos,
GL_ATI_draw_buffers, GL_ATI_element_array, GL_ATI_envmap_bumpmap,
GL_ATI_fragment_shader, GL_ATI_map_object_buffer, GL_ATI_separate_stencil,
GL_ATI_texture_env_combine3, GL_ATI_texture_float,
GL_ATI_texture_mirror_once, GL_ATI_vertex_array_object,
GL_ATI_vertex_attrib_array_object, GL_ATI_vertex_streams,
GL_ATIX_texture_env_combine3, GL_ATIX_texture_env_route,
GL_ATIX_vertex_shader_output_point_size, GL_EXT_abgr, GL_EXT_bgra,
GL_EXT_blend_color, GL_EXT_blend_func_separate, GL_EXT_blend_minmax,
GL_EXT_blend_subtract, GL_EXT_clip_volume_hint,
GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_multi_draw_arrays,
GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal,
GL_EXT_polygon_offset, GL_EXT_secondary_color,
GL_EXT_separate_specular_color, GL_EXT_stencil_wrap,
GL_EXT_texgen_reflection, GL_EXT_texture3D,
GL_EXT_texture_compression_s3tc, GL_EXT_texture_cube_map,
GL_EXT_texture_edge_clamp, GL_EXT_texture_env_combine,
GL_EXT_texture_env_dot3, GL_EXT_texture_filter_anisotropic,
GL_EXT_texture_lod_bias, GL_EXT_texture_object, GL_EXT_texture_rectangle,
GL_EXT_vertex_array, GL_EXT_vertex_shader, GL_HP_occlusion_test,
GL_NV_texgen_reflection, GL_NV_blend_square, GL_NV_occlusion_query,
GL_SGI_color_matrix, GL_SGI_texture_edge_clamp,
GL_SGIS_texture_border_clamp, GL_SGIS_texture_lod,
GL_SGIS_generate_mipmap, GL_SGIS_multitexture, GL_SUN_multi_draw_arrays
glu version: 1.1 Mesa 3.4.2
glu extensions:

visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat

0x23 24 tc 0 24 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow
0x24 24 tc 0 24 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow
0x25 24 tc 0 24 0 r y . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x26 24 tc 0 24 0 r . . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x27 24 tc 0 24 0 r y . 8 8 8 8 0 24 8 0 0 0 0 0 0 None
0x28 24 tc 0 24 0 r . . 8 8 8 8 0 24 8 0 0 0 0 0 0 None
0x29 24 tc 0 24 0 r y . 8 8 8 8 0 24 0 0 0 0 0 0 0 None
0x2a 24 tc 0 24 0 r . . 8 8 8 8 0 24 0 0 0 0 0 0 0 None
0x2b 24 dc 0 24 0 r y . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow
0x2c 24 dc 0 24 0 r . . 8 8 8 8 0 24 8 16 16 16 16 0 0 Slow
0x2d 24 dc 0 24 0 r y . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x2e 24 dc 0 24 0 r . . 8 8 8 8 0 24 0 16 16 16 16 0 0 Slow
0x2f 24 dc 0 24 0 r y . 8 8 8 8 0 24 8 0 0 0 0 0 0 None
0x30 24 dc 0 24 0 r . . 8 8 8 8 0 24 8 0 0 0 0 0 0 None
0x31 24 dc 0 24 0 r y . 8 8 8 8 0 24 0 0 0 0 0 0 0 None
0x32 24 dc 0 24 0 r . . 8 8 8 8 0 24 0 0 0 0 0 0 0 None

[This message has been edited by Ribeye (edited 05-22-2003).]


I take that back, I think. I installed the FireGL 2.9.6 drivers for my Radeon 9500, and the fglx_gears demo seems to work. I have yet to add the patches I described before to see if they COMPLETELY work, but the display is definitely noticeably different from the 2.5.1 drivers I had before. Definitely.


I just read the READMEs for the 2.9.6 and 2.9.16 drivers, and both clearly state that PBuffer support does not work on Radeons.

However, after making modifications to my code according to the README in the fgl_glxgears directory, I now have an application that runs on my 9700 Pro using PBuffers. There are a few issues that I need to resolve, but they do work.

The problem is also that they are not exposed in the GLX extension string, making the use of them a little messy.

ATI should really spend a couple of weeks and tidy up this mess: the website, the READMEs and the GLX 1.2/1.3 stuff. It is in no one’s best interest that things are so confused.

Hope this helps.

