glCreateShaderObjectARB() and Mesa problem

I’ve tried to run some shader samples on an old box with the GL extensions listed below. Notice that GL_ARB_shader_objects is not advertised in the list. Consequently GLEW provides a NULL pointer for glCreateShaderObjectARB() and the samples crash, but games utilizing shaders (enemy-territory, q3, …) work just fine. Is there another way to get at glCreateShaderObjectARB()?

OpenGL vendor string: Tungsten Graphics, Inc.
OpenGL renderer string: Mesa DRI R200 (RV280 5964) 20090101 AGP 8x x86/MMX+/3DNow!+/SSE TCL
OpenGL version string: 1.3 Mesa 7.8.1
OpenGL extensions:
GL_ARB_draw_buffers, GL_ARB_imaging, GL_ARB_multisample,
GL_ARB_multitexture, GL_ARB_point_parameters, GL_ARB_point_sprite,
GL_ARB_texture_border_clamp, GL_ARB_texture_compression,
GL_ARB_texture_cube_map, GL_ARB_texture_env_add,
GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar,
GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat,
GL_ARB_texture_rectangle, GL_ARB_transpose_matrix,
GL_ARB_vertex_buffer_object, GL_ARB_vertex_program, GL_ARB_window_pos,
GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color,
GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
GL_EXT_blend_logic_op, GL_EXT_blend_minmax, GL_EXT_blend_subtract,
GL_EXT_compiled_vertex_array, GL_EXT_convolution, GL_EXT_copy_texture,
GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_histogram,
GL_EXT_multi_draw_arrays, GL_EXT_packed_depth_stencil,
GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_polygon_offset,
GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_separate_specular_color, GL_EXT_stencil_wrap, GL_EXT_subtexture,
GL_EXT_texture, GL_EXT_texture3D, GL_EXT_texture_cube_map,
GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add,
GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3,
GL_EXT_texture_filter_anisotropic, GL_EXT_texture_lod_bias,
GL_EXT_texture_mirror_clamp, GL_EXT_texture_object,
GL_EXT_texture_rectangle, GL_EXT_vertex_array, GL_APPLE_packed_pixels,
GL_ATI_blend_equation_separate, GL_ATI_texture_env_combine3,
GL_ATI_texture_mirror_once, GL_ATI_fragment_shader,
GL_IBM_multimode_draw_arrays, GL_IBM_rasterpos_clip,
GL_IBM_texture_mirrored_repeat, GL_INGR_blend_func_separate,
GL_MESA_pack_invert, GL_MESA_ycbcr_texture, GL_MESA_window_pos,
GL_NV_blend_square, GL_NV_light_max_exponent, GL_NV_packed_depth_stencil,
GL_NV_texture_rectangle, GL_NV_texgen_reflection, GL_OES_read_format,
GL_SGI_color_matrix, GL_SGI_color_table, GL_SGIS_generate_mipmap,
GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp,
GL_SGIS_texture_lod, GL_SUN_multi_draw_arrays
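
For reference, a minimal guard that makes the samples fail gracefully instead of crashing would look roughly like this (just a sketch; it assumes glewInit() has already run on a current context):

/* Minimal sketch: check the GLEW extension flags before touching the
 * ARB_shader_objects entry points, which GLEW leaves NULL when the
 * extension is not advertised. */
#include <GL/glew.h>
#include <stdio.h>

GLhandleARB create_vertex_shader_or_bail(void)
{
    if (!GLEW_ARB_shader_objects || !GLEW_ARB_vertex_shader) {
        fprintf(stderr, "ARB_shader_objects not supported by this driver\n");
        return 0; /* calling glCreateShaderObjectARB here would crash */
    }
    return glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
}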

games utilizing shaders (enemy-territory, q3, …)

Um, Quake 3 was released in 1999. The first shader-capable cards appeared around 2001–2002, and GLSL dates to around 2004.

So no, Quake 3 does not use shaders. And since Enemy Territory is a Q3-engine game, it also doesn’t use shaders.

OpenGL version string: 1.3 Mesa 7.8.1

GL 1.3? You either have some very old hardware or some very old drivers.

Why the numerous shader-related extensions in the list?

The hw is the old ATI 9200SE, which shouldn’t be too old to support shaders. I am not using it for development, just for testing.

Don’t forget that on Windows the advertised GL version is 1.1 with a bunch of extensions.

http://xorg.freedesktop.org/wiki/RadeonFeature

says that GLSL is unsupported on the card. Probably what is supported are the assembly-style shaders.
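
For illustration only, driving the assembly-style path the card does advertise (GL_ARB_vertex_program) looks roughly like this; a sketch, with the entry points assumed to be resolved already (e.g. by GLEW):

/* Sketch: a pass-through ARB_vertex_program, the assembly-style path
 * this card exposes. Entry points assumed to be resolved by GLEW. */
#include <GL/glew.h>
#include <stdio.h>

static const char vp_src[] =
    "!!ARBvp1.0\n"
    "# transform the vertex by the modelview-projection matrix\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "TEMP pos;\n"
    "DP4 pos.x, mvp[0], vertex.position;\n"
    "DP4 pos.y, mvp[1], vertex.position;\n"
    "DP4 pos.z, mvp[2], vertex.position;\n"
    "DP4 pos.w, mvp[3], vertex.position;\n"
    "MOV result.position, pos;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

GLuint load_vertex_program(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)(sizeof(vp_src) - 1), vp_src);
    if (glGetError() != GL_NO_ERROR) {
        fprintf(stderr, "vertex program error: %s\n",
                (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
        return 0;
    }
    glEnable(GL_VERTEX_PROGRAM_ARB);
    return prog;
}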

I’ve actually managed to fix the problem by doing:

export LIBGL_ALWAYS_SOFTWARE=1

the fps are rather low now though :)
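
A quick way to confirm the override actually took effect is to print the renderer string after context creation; something like this sketch:

/* Sanity check: with LIBGL_ALWAYS_SOFTWARE=1 the renderer string should
 * report Mesa's software rasterizer instead of the R200 DRI driver. */
#include <GL/gl.h>
#include <stdio.h>

void print_gl_info(void)
{
    printf("renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("version:  %s\n", (const char *)glGetString(GL_VERSION));
}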

Don’t forget that on Windows the advertised GL version is 1.1 with a bunch of extensions.

No it isn’t. If you have a hardware-accelerated implementation, you get exactly the version the driver implements.

What you don’t get is automatic binding for entrypoints for GL versions greater than 1.1. You have to query those function pointers manually, just like you would if you were loading the .dll by hand.

If you’re getting GL 1.3 reported by a driver, you can only get GL 1.3 entrypoints and extensions. If GL 3.2 is reported by a driver, then you can get GL 3.2 entrypoints and extensions.
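
For anyone following along, resolving a post-1.1 entry point by hand on Windows looks roughly like this (a sketch; the typedefs are repeated only to keep it self-contained):

/* Sketch: manually resolving an extension entry point on Windows.
 * wglGetProcAddress only works while a GL context is current. */
#include <windows.h>
#include <GL/gl.h>

typedef unsigned int GLhandleARB; /* as in glext.h */
typedef GLhandleARB (APIENTRY *PFNGLCREATESHADEROBJECTARBPROC)(GLenum shaderType);

static PFNGLCREATESHADEROBJECTARBPROC pglCreateShaderObjectARB;

int load_shader_entry_points(void)
{
    pglCreateShaderObjectARB = (PFNGLCREATESHADEROBJECTARBPROC)
        wglGetProcAddress("glCreateShaderObjectARB");
    /* NULL means the driver does not export the function, which is exactly
     * what happens when ARB_shader_objects is not supported. */
    return pglCreateShaderObjectARB != NULL;
}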

I think GLSL could be supported even on that crappy card. GLSL or Cg could presumably be compiled into assembly… Maybe it would be possible to do this with Cg? Doesn’t Cg support even such assembly-style shaders?

The original assembly shaders were very simple, and GLSL is way more powerful.
Nvidia continues (not sure if this is still true) to update its assembly language to keep it almost as powerful as the latest GLSL, so Cg can compile GLSL to Nvidia asm without loss of features.

Read the spec for your card’s asm here: http://oss.sgi.com/projects/ogl-sample/registry/ATI/fragment_shader.txt
It is 8 years old.

I use the old machine as a wireless router, not to develop on. GLSL 1.20 works perfectly even on this 5-year-old relic via a software renderer.

I’ll try to compile some Cg code on the machine and see what happens.
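
For completeness, the path that now works under the software renderer is plain ARB_shader_objects, roughly like this sketch (error handling limited to the info log):

/* Sketch: compile a trivial GLSL 1.20 fragment shader through the
 * ARB_shader_objects entry points, the ones that were NULL before. */
#include <GL/glew.h>
#include <stdio.h>

static const char *fs_src =
    "#version 120\n"
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLhandleARB compile_fragment_shader(void)
{
    GLint ok = 0;
    GLhandleARB sh = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(sh, 1, &fs_src, NULL);
    glCompileShaderARB(sh);
    glGetObjectParameterivARB(sh, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
    if (!ok) {
        char log[1024];
        glGetInfoLogARB(sh, (GLsizei)sizeof(log), NULL, log);
        fprintf(stderr, "compile failed: %s\n", log);
    }
    return sh;
}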

I think GLSL could be supported even on that crappy card.

Well, you’re wrong. GLSL might be able to work as a vertex shader (with massive limitations), but this card doesn’t even support ARB_fragment_program. This is a pre-R300 card; there’s no reason to expect GLSL support on it.

The software Mesa renderer does its magic very well. I’ve given up on hardware-accelerated GLSL. I don’t know about Cg yet, but I’ll revisit this thread after I find out.

Forget about Cg; it cannot make GLSL-class shaders work on your hardware. If the slow speed of software GLSL suits you, it is better to stay with Mesa; otherwise you will need a newer card.
